Wednesday, June 3, 2015

Blueprint: Enabling Smart Software Defined Networks

by Seong Kim, System Architect in AMD’s Embedded Networking Division

The networking and communications industry is at a critical inflection point as it looks to embrace new technologies such as software-defined networking (SDN) and network function virtualization (NFV). While there are significant advantages to deploying a software-defined network, there are challenges as well. The implementation of SDN and NFV requires revamping network components and structures, and adopting new approaches to writing software for network management functions.

The hosting of SDN and NFV middleware and network management software on industry-standard processors is now being handled by modern multi-processor heterogeneous system architectures that incorporate both CPU and GPU resources within a single SoC.

What’s been missing until recently is a holistic view of networks and the technology providing a standardized separation of the control and data planes. SDN provides this capability, enabling data center operators and service providers to efficiently manage configuration, routing and policy enforcement for their evolving multi-tenant heterogeneous networks.

As defined by the Open Networking Foundation, SDN decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.

Unlike server virtualization, which enables sharing of a single physical resource by many users or entities, virtualizing network resources consolidates different physical resources by overlaying virtual network layers on heterogeneous networks, resulting in a unified, logically homogeneous network. Figure 1 describes three requirements that commonly define SDN architecture.

SDN Trends and Challenges

There are several different SDN deployment scenarios in the industry, although the original SDN concept proposes a centralized control plane, with only the data plane remaining in the network.

For the controller implementation, three basic topologies are being considered in the industry. The first is a centralized topology in which one SDN controller controls all the switches in the network. This approach, however, incurs a higher risk of failure, since it makes the central controller a single point of failure for the network. The second topology being investigated is the so-called distributed-centralized architecture, in which multiple “regional” SDN controllers, each controlling a subset of the network, communicate with a global central controller. This architecture avoids a single point of failure, since one controller can take over the function of a failed peer. Finally, Orion proposes a hierarchical topology that may provide better network scalability.
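To make the distributed-centralized idea concrete, here is a minimal C sketch modeling a region-to-controller mapping with a backup that takes over when a primary fails. All types, names and addresses are hypothetical illustrations, not taken from any real controller codebase.

```c
/* Hypothetical sketch of regional-controller failover in a
 * distributed-centralized SDN topology. All types, names and
 * addresses are illustrative, not from a real controller codebase. */
#include <stdbool.h>
#include <stdio.h>

struct region_ctrl {
    int  region_id;
    char primary[32];    /* address of the regional controller */
    char backup[32];     /* peer that takes over on failure */
    bool primary_alive;
};

/* Return the controller currently responsible for a region. */
const char *active_controller(const struct region_ctrl *r)
{
    return r->primary_alive ? r->primary : r->backup;
}

int main(void)
{
    struct region_ctrl regions[] = {
        { 0, "10.0.0.1", "10.0.1.1", true  },
        { 1, "10.0.1.1", "10.0.0.1", false },  /* region 1 has failed over */
    };

    for (int i = 0; i < 2; i++)
        printf("region %d -> controller %s\n",
               regions[i].region_id, active_controller(&regions[i]));
    return 0;
}
```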

Apart from the controller, the data plane can also become a challenge in the transition to SDN, because traditional switching and forwarding devices/ASICs cannot easily support SDN traffic given the evolving standards. Hence the need for a hybrid approach. Specifically, a portion of the network (e.g., the access network) can be SDN-enabled while the other portion (e.g., the core network) remains a ‘traditional’ network. Traditional platforms sit in the intermediate nodes, acting as a big pipe, while SDN-enabled platforms serve as the switching and routing platforms. With this approach, an SDN network can be enabled immediately without overhauling the entire network.

Challenges in SDN are still emerging, as the definition of SDN continues to evolve. The scale-out network paradigm is evolving as well. Due to these uncertainties, abstraction mechanisms from different vendors will compete or co-exist. In addition, creation of SDN controllers and switches requires resolution of design challenges in many hardware platforms.

The data center environment is the most common use case for SDN. In the traditional data center network, there are ToR (Top of Rack), EoR (End of Row), aggregation and core switches. Multi-tier networking is a common configuration. To increase data center network manageability, SDN can abstract physical elements and represent them as logical elements using software. It treats all network elements as one large resource across multiple network segments. Therefore it can provide complete visibility of the network and manage policies across network nodes connected to virtual and physical switches.

Figure 2 shows a traditional multi-tier data center network and how an SDN controller can manage the entire network from a centralized location.

SDN’s basic tenet is to remove vendor-specific dependencies, reduce complexity and improve control, allowing the network to quickly adapt to changes in business needs. Other key SDN requirements are the disaggregation of control and data planes, and the integration of strong compute and packet processing capabilities. Companies are now collaborating to demonstrate the feasibility of a complete SDN solution utilizing the unique compute capabilities and power efficiency of heterogeneous, general purpose processors.

Software Enablement for SDN

One such demonstration of the integration needed to enable SDN is an ETSI NFV proof-of-concept in which several companies demonstrated the integration of the Data Plane Development Kit (DPDK) on an x86 platform and OpenDataPlane (ODP) on an ARM-based platform running OpenStack. The DPDK and ODP middleware enables fast packet I/O for general-purpose CPU platforms, eliminating the data-path bottleneck that exists when packets cannot pass through to user space. This middleware is a must-have for an SDN solution, providing a unified interface across platforms, including x86 and ARM64.
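As an illustration of what user-space fast-path I/O looks like in practice, here is a minimal sketch of a DPDK poll-mode receive loop. It assumes port 0 is already bound to a DPDK driver, configured and started; the setup code is omitted and the processing step is a placeholder.

```c
/* Minimal sketch of a DPDK poll-mode receive loop. Assumes port 0 is
 * already bound to a DPDK driver, configured and started (setup code
 * omitted for brevity); the processing step is a placeholder. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, lcores). */
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;

    struct rte_mbuf *bufs[BURST_SIZE];
    const uint16_t port_id = 0;

    for (;;) {
        /* Poll the NIC directly from user space; the kernel network
         * stack is bypassed entirely, which is where the fast-path
         * performance gain comes from. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* Classification/forwarding logic would go here. */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
```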

High Compute Power at a Low Power Envelope

An SDN controller needs strong compute capability to handle the large amount of control traffic coming from many SDN switches, since each individual flow needs handling by the central SDN controller. This raises concerns about the SDN controller both in terms of performance and as a single point of failure.

There are different architectures proposed in the industry to mitigate the load on the central controller. One example is a distributed-centralized controller which has several SDN controllers, each managing a subsection of the network, with an additional control layer managing these regional controllers. This architecture requires smart, distributed and powerful compute capabilities throughout the entire network of SDN controllers. Different nodes, including SDN switch nodes, require different levels of performance and power. SDN implementations benefit from vendor platforms that offer a range of performance capabilities, matching the appropriate level of resources at the necessary point in the network design.

Security Enhancements

The need for security is growing, and as the amount of control traffic increases, so does the need for crypto acceleration and offload. By offloading crypto operations to acceleration engines such as a CCP (crypto co-processor) on the CPU, or to the GPU, system-level performance can be maintained without compromising compute performance.
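For a sense of how such offload appears to software, the sketch below encrypts a buffer through OpenSSL's EVP interface, which is hardware-agnostic: on a suitably configured platform, an engine backed by a crypto co-processor can service the same calls. The engine configuration is platform-specific and omitted here; this is an illustrative sketch, not AMD's CCP programming interface.

```c
/* Illustrative sketch: AES-128-CBC encryption through OpenSSL's EVP
 * interface. EVP is hardware-agnostic, so an ENGINE backed by a crypto
 * co-processor can service the same calls transparently (engine setup
 * is platform-specific and omitted). Key and IV must each be 16 bytes. */
#include <openssl/evp.h>

int encrypt_buf(const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int in_len,
                unsigned char *out, int *out_len)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = 0;

    if (ctx == NULL)
        return -1;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1)
        goto err;
    if (EVP_EncryptUpdate(ctx, out, &len, in, in_len) != 1)
        goto err;
    total = len;
    if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)
        goto err;
    *out_len = total + len;

    EVP_CIPHER_CTX_free(ctx);
    return 0;
err:
    EVP_CIPHER_CTX_free(ctx);
    return -1;
}
```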

Deep Packet Inspection (DPI) - Understanding Network Traffic Flow

In order for an SDN controller to manage the network and associated policies, it requires a good ‘understanding’ of networking traffic. Centralized or distributed SDN architectures can support a deep understanding of traffic by collecting sets of packets from a traffic flow and analyzing them. There are two different ways to support this requirement.

Option 1—Assuming a big pipe/channel between the SDN switches and the SDN controller, all of the deep packet inspection or application recognition can be done in the central controller with a powerful DPI engine.

Option 2—A small DPI engine can be implemented in the distributed SDN switches. These switches perform a basic deep packet inspection, then report the results or forward only the streams of important traffic. The latter approach requires a cheaper and simpler implementation, in keeping with the basic SDN tenet.
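As a rough illustration of the switch-side inspection in Option 2, the sketch below uses POSIX regular expressions to flag payloads matching a pattern of interest, so that only relevant flows are reported upstream. The pattern and payload are placeholders, and a production engine would compile its patterns once rather than per packet.

```c
/* Sketch of a small switch-side DPI check (Option 2): flag payloads
 * that match a pattern so only interesting flows are reported upstream.
 * Pattern and payload are illustrative placeholders; a real engine
 * would compile its patterns once, not per packet. */
#include <regex.h>
#include <stdio.h>

int payload_matches(const char *pattern, const char *payload)
{
    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 0;                 /* treat a bad pattern as "no match" */
    int hit = (regexec(&re, payload, 0, NULL, 0) == 0);
    regfree(&re);
    return hit;
}

int main(void)
{
    const char *payload = "GET /video/stream HTTP/1.1";

    if (payload_matches("^(GET|POST) /video/", payload))
        printf("report this flow to the SDN controller\n");
    return 0;
}
```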

Low-cost and low-power processors can be used for DPI applications. The combination of CPUs and GPUs found in heterogeneous architectures, with the GPU highly optimized for parallel programmable workloads, provides a significant performance advantage.

I/O Integration

The main processor for SDN requires high-speed I/O interfaces, for example, embedded 1GE and 10GE network interfaces and PCIe. Integrating these interfaces can lower system cost and ease system design complexity.

Software

Complicating the development of new SDN solutions is the continuing evolution of standards. Throughout the industry there are different approaches to enabling network virtualization (for example, VXLAN and NVGRE), and these standards continue to evolve as they move to their next phases. To meet the requirements of these evolving standards – and any emerging network overlay protocols – platforms must provide flexibility and ease of programmability. As an example, the transition from the OpenFlow 1.0 specification to OpenFlow 1.3 significantly increased complexity as it aimed to support many types of networking functions and protocols.
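To make the overlay encapsulation concrete, the sketch below lays out the 8-byte VXLAN header defined in RFC 7348, which an encapsulating platform prepends to the inner Ethernet frame after the outer IP/UDP headers. The field layout follows the RFC; the helper function naming is illustrative.

```c
/* Sketch: the 8-byte VXLAN header defined in RFC 7348, prepended to
 * the inner Ethernet frame after the outer IP/UDP headers. The field
 * layout follows the RFC; the helper naming is illustrative. */
#include <stdint.h>

struct vxlan_hdr {
    uint8_t flags;         /* 0x08 => the "I" flag: VNI field is valid */
    uint8_t reserved1[3];
    uint8_t vni[3];        /* 24-bit VXLAN Network Identifier */
    uint8_t reserved2;
};

/* Fill in a VXLAN header for a given virtual network ID. */
void vxlan_hdr_init(struct vxlan_hdr *h, uint32_t vni)
{
    h->flags = 0x08;
    h->reserved1[0] = h->reserved1[1] = h->reserved1[2] = 0;
    h->vni[0] = (uint8_t)((vni >> 16) & 0xff);  /* big-endian on the wire */
    h->vni[1] = (uint8_t)((vni >> 8) & 0xff);
    h->vni[2] = (uint8_t)(vni & 0xff);
    h->reserved2 = 0;
}

int main(void)
{
    struct vxlan_hdr h;
    vxlan_hdr_init(&h, 5001);   /* example VNI for one tenant segment */
    return 0;
}
```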

Platform Needs

Modern heterogeneous compute platforms contain the following three major function blocks:
  • General-purpose, programmable scalar (CPU) and vector (GPU) processing cores
  • A high-performance bus
  • A common, low-latency memory model

Leading heterogeneous designs are critical to maximizing throughput. For example, on AMD platforms incorporating the Heterogeneous System Architecture (HSA), the CPU hands the GPU only pointers to the data blocks. The GPU takes the pointers, processes the data blocks in place, and hands the pointers back to the CPU. HSA ensures cache coherency between the CPU and the GPU. Figure 3 depicts an overview of this architecture.
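The fragment below sketches what this pointer hand-off can look like from host code, using OpenCL 2.0 shared virtual memory as one concrete API over such hardware. It assumes fine-grained SVM support; platform, context, queue and kernel setup are omitted, and the kernel object is assumed to be built from a placeholder packet-processing kernel. This is an illustration, not AMD's HSA runtime API.

```c
/* Sketch: zero-copy CPU-to-GPU hand-off using OpenCL 2.0 fine-grained
 * shared virtual memory, one concrete API over HSA-style hardware.
 * Platform, context, queue and kernel setup are omitted; the kernel
 * object is assumed to be built from a placeholder packet-processing
 * kernel. */
#include <CL/cl.h>

void offload(cl_context ctx, cl_command_queue q, cl_kernel k, size_t n)
{
    /* Allocate a buffer visible to both CPU and GPU -- no copies. */
    unsigned char *data = clSVMAlloc(
        ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER, n, 0);
    if (data == NULL)
        return;

    /* ... CPU fills `data` with packets here ... */

    /* Hand the GPU the pointer itself, not a copy of the data. */
    clSetKernelArgSVMPointer(k, 0, data);

    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);        /* CPU can now read the results in place */

    clSVMFree(ctx, data);
}
```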

GPUs are extremely efficient for parallel processing applications, and they can also be used for crypto operations, DPI, classification, compression and other workloads. In the case of crypto, the CPU does not have to get involved in data-plane crypto operations directly, so system-level performance can be maintained even as the amount of traffic needing encryption or decryption increases. In a heterogeneous-capable processor, software can selectively accelerate or offload CPU compute-intensive operations to the GPU. Here are a few additional functions that can be accelerated or offloaded to the GPU:

  • DPI: RegEx engine implementation
  • Security (such as IPsec) operations: RSA and other crypto operations
  • Compression operations for distributed storage applications


Figure 4 shows a number of different networking use cases and examples of where different levels of embedded processors integrate into the solution.

Conclusion

SDN introduces a new approach to network resource utilization and management, and each networking vendor in the market is looking for its own way to build SDN solutions. One key action that needs to be taken to enable SDN is to open up the intelligence of switches and routers to enable the abstraction of proprietary vendor technologies.

Mega data center players (Amazon, Google, Facebook and the like) are implementing technologies that allow greater flexibility and lower costs. Amazon and Google are building their own networking (white box) switches so that they don’t have to rely on the platforms produced by OEM vendors. Facebook is driving the Open Compute Project (OCP) to develop specifications for open-architecture switches that will be manufactured by low-cost original design manufacturers (ODMs). Facebook’s open-architecture approach is creating an ecosystem where standard, high-volume commodity platforms can be used to minimize CAPEX and OPEX.

SDN will drive the industry toward a more software-centric architecture and implementation, making it more difficult for OEMs to provide platform differentiators. With SDN, the need for less expensive and easy-to-access hardware becomes paramount, and platform-specific, value-added services are deprioritized.

About the Author

Seong Kim is currently a system architect in AMD’s Embedded Networking Division. He has more than 15 years of experience in networking systems architecture and technical marketing. His recent initiatives include NFV, SDN, server virtualization, wireless communication networking, and security and threat management solutions. Dr. Kim’s work has appeared in numerous publications, including IEEE Communications and Elsevier magazines, and he has presented at several industry conferences and webinars. He has several US patents granted and pending in the field of networking. Kim holds a Ph.D. in Electrical Engineering from the State University of New York at Stony Brook (SBU) and an M.B.A. from Lehigh University.



Got an idea for a Blueprint column? We welcome your ideas on next-gen network architecture.
See our guidelines.

Cisco to Acquire Piston Cloud for OpenStack

Cisco agreed to acquire Piston Cloud Computing, a start-up based in San Francisco, for its enterprise OpenStack solutions. Financial terms were not disclosed.

Piston Enterprise OpenStack is designed for building, scaling and managing a private Infrastructure-as-a-Service (IaaS) cloud on bare-metal, converged commodity hardware.  Piston Cloud enables Cloud Foundry's Platform-as-a-Service (PaaS) offering to run on OpenStack. It also supports leading automation solutions, including Opscode, Puppet Labs and RightScale.

Cisco said the acquisition will help advance its Intercloud, which is a globally connected network of clouds being built with its partners.

"The acquisition of Piston will complement our Intercloud strategy by bringing additional operational experience on the underlying infrastructure that powers Cisco OpenStack Private Cloud. Additionally, Piston’s deep knowledge of distributed systems and automated deployment will help further enhance our delivery capabilities for customers and partners," stated Cisco's Hilton Romanski in a blog posting.

http://blogs.cisco.com/news/cisco-announces-intent-to-acquire-piston
http://pistoncloud.com/

IBM Acquires Blue Box for OpenStack Cloud Migration

IBM has acquired Blue Box Group, a managed private cloud provider built on OpenStack. Financial terms were not disclosed.

Blue Box, which is based in Seattle, provides a private cloud as a service platform designed to enable easier deployment of workloads across hybrid cloud environments.

IBM said the acquisition reinforces its commitment to deliver flexible cloud computing models that make it easier for customers to move data and applications across clouds and meet their needs across public, private and hybrid cloud environments. Blue Box also strengthens IBM Cloud’s existing OpenStack portfolio with the introduction of a remotely managed OpenStack offering that provides clients with a local cloud and increased visibility, control and security.

“IBM is dedicated to helping our clients migrate to the cloud in an open, secure, data-rich environment that meets their current and future business needs,” said IBM General Manager of Cloud Services Jim Comfort. “The acquisition of Blue Box accelerates IBM’s open cloud strategy, making it easier for our clients to move data and applications across clouds and adopt hybrid cloud environments."

http://www.ibm.com/cloud
https://www.blueboxcloud.com/

HP Advances its Helion CloudSystem for Multiple Clouds

HP rolled out the latest version of its flagship integrated enterprise cloud solution, Helion CloudSystem 9.0, expanding support for multiple hypervisors and multiple clouds.

HP Helion CloudSystem 9.0 integrates HP Helion OpenStack and the HP Helion Development Platform to provide customers an enterprise grade open source platform for cloud native application development and infrastructure. Some highlights of HP Helion CloudSystem 9.0:

  • Simultaneous support for multiple cloud environments, including Amazon Web Services (AWS), Microsoft Azure, HP Helion Public Cloud, OpenStack technology and VMware, with the ability to fully control where workloads reside
  • The latest release of HP Helion OpenStack, exposing OpenStack software APIs to simplify and speed development and integration with other clouds and offering developer-friendly add-ons with the HP Helion Development Platform based on Cloud Foundry
  • Support for multiple hypervisors, now including Microsoft Hyper-V, Red Hat KVM, VMware vSphere, as well as bare metal deployments, offering customers additional choice and avoiding vendor lock-in
  • Support for AWS-compatible private clouds through integration with HP Helion Eucalyptus, giving customers the flexibility to deploy existing AWS workloads onto clouds they control
  • Support for unstructured data through the Swift OpenStack Object Storage project

"Enterprise customers have a range of needs in moving to the cloud -- some need to cloud-enable traditional workloads, while others seek to build next generation 'cloud native' apps using modern technologies like OpenStack, Cloud Foundry and Docker," said Bill Hilf, senior vice president, HP Helion Product and Service Management. "The expanded support for multiple hypervisors and cloud environments in HP Helion CloudSystem 9.0 gives enterprises and service providers added flexibility to gain cloud benefits for their existing and new applications."

http://www8.hp.com/us/en/cloud/helion-overview.html

  • HP is currently operating 85 data centers worldwide.

DE-CIX Notes Upturn in 100G Internet Exchange Connections

DE-CIX, which recently recorded an all-time peak throughput of 4 Terabits per second on its Internet Exchange in Frankfurt, is seeing customers upgrade their port size and capacity.  The company reports that during the first quarter of 2015, customers ordered the same number of 100 Gigabit Ethernet (100 GE) ports as they did in all of 2014.

As an example, Akamai, the global leader in Content Delivery Network (CDN) services, has upgraded its capacity at DE-CIX in Frankfurt to 12x100 GE connections. Delivering 1.2 Terabits per second, this is the largest service provider bandwidth at any Internet exchange worldwide.

“Our goal is for our customers’ content to arrive as quickly as possible at its final destination, no matter where in the world the end user is,” says Noam Freedman, Senior Vice President, Networks and Chief Network Architect at Akamai Technologies. “With more than 700 Internet service providers worldwide, DE-CIX is one of our most important Internet exchange providers. With the 2013 implementation of its DE-CIX Apollon technology platform to easily handle 100 GE demands, DE-CIX has made it very easy for us to expand our capacity there. In this way, we’re well-positioned for the growing IP traffic volumes around the world.”

http://www.de-cix.net


Movimento Launches Automotive OTA Platform with Diagnostics

Movimento Group, which specializes in automotive software, introduced an Over-the-Air (OTA) Platform for automated updates of car electronics, remote vehicle diagnostics and in-vehicle cybersecurity.

Movimento said its OTA platform delivers multiple benefits to automotive OEMs and Tier-1 module manufacturers as well as to vehicle owners, including owners of legacy vehicles. The Movimento OTA platform can intelligently assess vehicle status before installing software updates, ensuring that data is transferred only when it is safe and reliable to do so. This bi-directional data-gathering capability reports vehicle diagnostics and prognostics and enables preventative analytics, letting car makers react in real time to their customers and provide data to third-party companies for insurance and other purposes.  The company also noted that it is participating in European trials for self-parking cars.

"The driverless cars of tomorrow are unbelievably complex but they are definitely coming," explained Movimento Group CEO Ben Hoffman.  "A Boeing Dreamliner 787 now has 15 million lines of code but there will be ten times more in autonomous cars when they arrive."

http://movimentogroup.com/

Ericsson Refines its Adaptive Inventory OSS/BSS

Ericsson released version 9.3 of its Adaptive Inventory, adding new capabilities to help operators speed cloud deployments, virtualize network functions and grapple with surging data traffic.

Ericsson Adaptive Inventory, formerly Ericsson Granite Inventory, uses data from a broad range of available sources to offer a view of a network at any given moment, including past, present and future configurations. It features an enhanced Unified Inventory Engine, intuitive web interface, component-driven design automation, and system extension kit. Ericsson has created a migration path to Ericsson Adaptive Inventory for existing Ericsson Granite Inventory customers.

“Access to accurate network data positions operators for better decision-making, including the ability to predict the future-state network from a combination of current and proposed network plans.  The latest tools that Ericsson Adaptive Inventory brings to market are easy to use, deploy and maintain, helping operators thrive in an ever-changing environment,” stated Elisabetta Romano, VP Head of OSS & Service Enablement.

http://www.ericsson.com