by Prayson Pate, CTO, Edge Cloud, ADVA
Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be deployed securely, automatically, at scale, over the public internet?
The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.
The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.
Achieving that requires an automated and layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.
Let’s take a closer look at what edge cloud is, and how we can build and deliver it, securely and at scale.
Defining and building the edge cloud
Before we continue with the security discussion, let’s talk about what we mean by edge cloud.
Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network, and the use of those resources both for standard compute loads (micro-cloud) and for communications infrastructure (uCPE, SD-WAN, MEC, etc.).
One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.
It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking, and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and it gives us the benefits of a standard cloud model in a small footprint.
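To make the “cloud in a box” idea concrete, here is a minimal sketch that asks Keystone for the service catalog and checks that every OpenStack endpoint resolves to the same host, which is what we expect when compute, storage, networking, and control all live on one node. The node address, credentials, and CA bundle path are placeholders, not a recipe for any specific distribution.

```python
# Minimal sketch: confirm that a "cloud in a box" node serves all of its
# OpenStack endpoints (identity, compute, network, storage) from one host.
# The address, credentials, and CA bundle path are placeholders.
import requests
from urllib.parse import urlparse

KEYSTONE = "https://192.0.2.10:5000/v3"  # hypothetical all-in-one node

auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"id": "default"},
                    "password": "example-password",
                }
            },
        },
        "scope": {"project": {"name": "admin", "domain": {"id": "default"}}},
    }
}

# Keystone v3 returns the service catalog along with a scoped token.
resp = requests.post(
    f"{KEYSTONE}/auth/tokens",
    json=auth_request,
    verify="/etc/ssl/certs/edge-node-ca.pem",  # placeholder CA bundle for the node
)
resp.raise_for_status()
catalog = resp.json()["token"]["catalog"]

# In a cloud-in-a-box deployment, every endpoint should resolve to the same host.
hosts = {
    urlparse(endpoint["url"]).hostname
    for service in catalog
    for endpoint in service["endpoints"]
}
print("Endpoint hosts:", hosts)  # expect a single entry for an all-in-one node
```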
Security out of the box
Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.
The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server and delivers the digital key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied over the secure tunnel.
The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.
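As a rough illustration of the device side of this flow, here is a minimal sketch assuming a hypothetical ZTP server URL, endpoint paths, and payload fields. The actual tunnel technology (TLS with a pinned CA, IPsec, etc.) and message formats will vary by platform.

```python
# Minimal sketch of the device side of secure ZTP, assuming a hypothetical
# ZTP server URL and a one-time digital key delivered to the end user
# out of band. Endpoint paths and payload fields are illustrative.
import requests

ZTP_SERVER = "https://ztp.example-provider.net"   # hypothetical ZTP server
DIGITAL_KEY = "one-time-key-entered-by-end-user"  # delivered separately
DEVICE_SERIAL = "EDGE-0001"

with requests.Session() as tunnel:
    # In a real deployment this session runs inside an authenticated,
    # encrypted tunnel; here we simply pin the provider's CA certificate.
    tunnel.verify = "/etc/ztp/provider-ca.pem"

    # Step 1: present the digital key so the ZTP server can authenticate
    # this specific device and confirm it reached the intended site.
    resp = tunnel.post(
        f"{ZTP_SERVER}/ztp/authenticate",
        json={"serial": DEVICE_SERIAL, "digital_key": DIGITAL_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    session_token = resp.json()["session_token"]

    # Step 2: pull the site-specific configuration over the same tunnel.
    config = tunnel.get(
        f"{ZTP_SERVER}/ztp/config",
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=30,
    ).json()

    # Step 3: the tunnel is kept for ongoing MANO control and monitoring;
    # here we simply report that provisioning completed.
    tunnel.post(
        f"{ZTP_SERVER}/ztp/status",
        headers={"Authorization": f"Bearer {session_token}"},
        json={"serial": DEVICE_SERIAL, "state": "provisioned"},
        timeout=30,
    )
```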
Security on the edge cloud
One possible drawback of the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
• Application layer – software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of the platform, with the addition of a third-party firewall/UTM as part of the service chain
• Management layer – two-factor authentication at customer site with encryption of management and user tunnels
• Virtualization layer – safeguards against VM escape (protecting one VM from another and preventing rogue management systems from connecting to the hypervisor) and VNF attestation via checksum validation (see the sketch after this list)
• Network layer – modern encryption alongside Layer 2 and Layer 3 protocols, with micro-segmentation to separate management traffic from user traffic and to protect both
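To make the virtualization-layer point concrete, here is a minimal sketch of VNF attestation via checksum validation. The image name, path, and expected digest are placeholders; a production platform would pull the expected digests from a signed catalog rather than a hard-coded table.

```python
# Minimal sketch of VNF attestation via checksum validation: before a VNF
# image is booted, its SHA-256 digest is compared against the digest
# published out of band. Paths and digests are illustrative placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = {
    # image file name -> digest published by the VNF vendor (placeholder value)
    "vfirewall-2.4.1.qcow2": "placeholder-digest",
}

def attest_vnf_image(image_path: Path) -> bool:
    """Return True only if the image's SHA-256 matches the published digest."""
    digest = hashlib.sha256()
    with image_path.open("rb") as image:
        for chunk in iter(lambda: image.read(1024 * 1024), b""):
            digest.update(chunk)
    expected = EXPECTED_SHA256.get(image_path.name)
    return expected is not None and digest.hexdigest() == expected

if __name__ == "__main__":
    image = Path("/var/lib/images/vfirewall-2.4.1.qcow2")  # placeholder path
    if not attest_vnf_image(image):
        raise SystemExit(f"Attestation failed for {image.name}; refusing to boot")
    print(f"{image.name} attested; safe to launch")
```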
Security of the management software
Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means using modern security gateways both to protect the MANO software and to provide the secure management tunnels for connectivity.
But that’s not enough. The MANO software should support scalable deployments and tenancy. Scalability should be built in using modern techniques so that tools like load balancers can be used to support scale-out. Tenancy is a useful tool to separate customers or regions and to contain security breaches.
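As a toy illustration of why tenancy contains breaches, the sketch below scopes every API token to a single tenant, so a leaked token exposes only that tenant’s edge nodes. The class and names are hypothetical, not part of any particular MANO product.

```python
# Minimal sketch of tenancy as a containment boundary in MANO software:
# every API token is scoped to one tenant, so a leaked token only exposes
# that tenant's edge nodes. Tenant names and node inventories are illustrative.
import secrets

class TenantScopedInventory:
    def __init__(self):
        self._nodes_by_tenant = {}   # tenant -> list of edge nodes
        self._tenant_by_token = {}   # API token -> tenant

    def register_tenant(self, tenant: str, nodes: list[str]) -> str:
        """Create a tenant and return an API token scoped to it."""
        token = secrets.token_urlsafe(32)
        self._nodes_by_tenant[tenant] = nodes
        self._tenant_by_token[token] = tenant
        return token

    def nodes_for(self, token: str) -> list[str]:
        """A token can only ever see its own tenant's nodes."""
        tenant = self._tenant_by_token.get(token)
        if tenant is None:
            raise PermissionError("unknown or revoked token")
        return self._nodes_by_tenant[tenant]

inventory = TenantScopedInventory()
token_a = inventory.register_tenant("customer-a", ["edge-a-01", "edge-a-02"])
token_b = inventory.register_tenant("customer-b", ["edge-b-01"])

print(inventory.nodes_for(token_a))  # only customer-a's nodes are visible
print(inventory.nodes_for(token_b))  # only customer-b's nodes are visible
```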
Security is an ongoing process
Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
• Automated source code verification by tools such as Protecode and Black Duck
• Automated functional verification by tools such as Nessus and OpenSCAP (see the sketch after this list)
• Monitoring of vulnerability within open source components such as Linux and OpenStack
• Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
• Application of patches and updates as needed
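As one example of what automated functional verification could look like, here is a sketch that wraps the standard OpenSCAP command line (oscap xccdf eval) so a scan can run on a schedule. The profile ID and datastream path are placeholders for whatever policy applies to your nodes.

```python
# Minimal sketch of automating a recurring OpenSCAP scan on an edge node.
# `oscap xccdf eval` is the standard OpenSCAP CLI; the profile ID and
# datastream path below are placeholders for the policy in use.
import subprocess
from datetime import datetime, timezone

DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-ubuntu2204-ds.xml"  # placeholder
PROFILE = "xccdf_org.ssgproject.content_profile_standard"             # placeholder

def run_scan() -> int:
    timestamp = f"{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    results_file = f"/var/log/scap/results-{timestamp}.xml"
    completed = subprocess.run(
        [
            "oscap", "xccdf", "eval",
            "--profile", PROFILE,
            "--results", results_file,
            DATASTREAM,
        ],
        capture_output=True,
        text=True,
    )
    # oscap returns 0 for a clean pass and 2 when some rules failed;
    # anything else means the scan itself could not run.
    if completed.returncode not in (0, 2):
        raise RuntimeError(f"Scan failed to run: {completed.stderr.strip()}")
    return completed.returncode

if __name__ == "__main__":
    status = run_scan()
    print("All rules passed" if status == 0 else "Some rules failed; review results")
```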
Build out the cloud, but secure it
The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments to optimize application placement. But ensuring security at those distributed edge cloud nodes means applying security in an automated and layered way. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.