by Seong Kim, System Architect in AMD’s Embedded Networking Division
The networking and communications industry is at a critical inflection point as it embraces new technologies such as software-defined networking (SDN) and network function virtualization (NFV). While there are significant advantages to deploying a software-defined network, there are challenges as well. Implementing SDN and NFV requires revamping network components and structures and adopting new approaches to writing software for network management functions.
SDN and NFV middleware and network management software can now be hosted on industry-standard processors built around modern heterogeneous system architectures that integrate both CPU and GPU resources within a single SoC.
What has been missing until recently is a holistic view of the network and a standardized way to separate the control and data planes. SDN provides this capability, allowing data center operators and service providers to efficiently manage configuration, routing and policy enforcement across their evolving multi-tenant, heterogeneous networks.
As defined by the Open Networking Foundation, SDN decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.
Unlike server virtualization, which lets many users or entities share a single physical resource, network virtualization consolidates different physical resources by overlaying virtual network layers on top of heterogeneous networks, resulting in a unified, logically homogeneous network. Figure 1 describes three requirements that commonly define an SDN architecture.
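To make that consolidation concrete, the short Python sketch below models one tenant-visible logical network overlaid on segments built from different physical technologies. All class, tenant and segment names here are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalSegment:
    """One concrete piece of infrastructure (names and fields are hypothetical)."""
    name: str
    technology: str          # e.g. "ethernet-vlan", "mpls", "vxlan-fabric"

@dataclass
class LogicalNetwork:
    """A tenant-visible network overlaid on heterogeneous physical segments."""
    tenant: str
    segments: list = field(default_factory=list)

    def attach(self, segment: PhysicalSegment) -> None:
        # The overlay hides the underlying technology from the tenant.
        self.segments.append(segment)

    def describe(self) -> str:
        kinds = sorted({s.technology for s in self.segments})
        return f"tenant={self.tenant}: one logical network over {kinds}"

net = LogicalNetwork("tenant-a")
net.attach(PhysicalSegment("dc1-rack7", "ethernet-vlan"))
net.attach(PhysicalSegment("wan-link-3", "mpls"))
net.attach(PhysicalSegment("dc2-pod2", "vxlan-fabric"))
print(net.describe())
```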
SDN Trends and Challenges
There are several different SDN deployment scenarios in the industry, although the original SDN concept calls for a centralized control plane, with only the data plane remaining in the network elements.
For the controller implementation, three basic topologies are being considered in the industry. The first is a centralized topology in which one SDN controller controls all the switches in the network. This approach, however, carries a higher risk of failure, since the central controller becomes a single point of failure for the network. The second is the so-called distributed-centralized architecture, in which multiple “regional” SDN controllers, each controlling a subset of the network, communicate with a global central controller. This architecture mitigates the single point of failure, since one regional controller can take over the function of a failed peer. Finally, Orion proposes a hierarchical topology that may provide better network scalability.
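The distributed-centralized takeover idea can be sketched minimally in Python, under the simplifying assumption that the global controller only reassigns a failed regional controller's switches to a surviving peer; real controllers would also synchronize flow and topology state. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RegionalController:
    """Controls one subset ("region") of the network."""
    name: str
    switches: list = field(default_factory=list)
    alive: bool = True

class CentralController:
    """Global controller in a distributed-centralized topology."""
    def __init__(self, regions: list[RegionalController]):
        self.regions = regions

    def reassign_on_failure(self, failed: RegionalController) -> None:
        # A surviving regional controller adopts the failed controller's
        # switches, so no single regional failure takes down the region.
        failed.alive = False
        backup = next(r for r in self.regions if r.alive)
        backup.switches.extend(failed.switches)
        failed.switches.clear()

west = RegionalController("west", ["sw1", "sw2"])
east = RegionalController("east", ["sw3"])
central = CentralController([west, east])
central.reassign_on_failure(west)
print(east.switches)   # ['sw3', 'sw1', 'sw2']
```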
Apart from the controller, the data plane can also become a challenge during the transition to SDN, because traditional switching and forwarding devices and ASICs cannot easily support SDN traffic while standards are still evolving. Hence the need for a hybrid approach: a portion of the network (e.g., the access network) can be SDN-enabled while the other portion (e.g., the core network) remains a ‘traditional’ network. Traditional platforms sit at the intermediate nodes, acting as a big pipe, while SDN-enabled platforms serve as the switching and routing platforms. With this approach, an SDN network can be enabled immediately without overhauling the entire network.
Challenges in SDN are still emerging, as the definition of SDN continues to evolve. The scale-out network paradigm is evolving as well. Due to these uncertainties, abstraction mechanisms from different vendors will compete or co-exist. In addition, building SDN controllers and switches requires resolving design challenges across many hardware platforms.
The data center environment is the most common use case for SDN. A traditional data center network contains ToR (top-of-rack), EoR (end-of-row), aggregation and core switches, and multi-tier networking is a common configuration. To increase data center network manageability, SDN can abstract the physical elements and represent them as logical elements in software, treating all network elements as one large resource spanning multiple network segments. It can therefore provide complete visibility of the network and manage policies across network nodes connected to both virtual and physical switches.
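A minimal sketch of that single-resource view, using hypothetical names, might look like the following: the controller keeps one inventory spanning ToR, EoR, aggregation and virtual switches and pushes the same policy to every node regardless of tier or vendor.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """A network-wide rule the controller pushes to every node (hypothetical)."""
    name: str
    match: str      # e.g. "tcp dst port 80"
    action: str     # e.g. "rate-limit 1Gbps"

class Switch:
    def __init__(self, name: str, kind: str):   # kind: "ToR", "EoR", "aggregation", "virtual"
        self.name, self.kind, self.rules = name, kind, []

    def install(self, policy: Policy) -> None:
        self.rules.append(policy)

class SdnController:
    """Treats all elements, physical or virtual, as one pool of resources."""
    def __init__(self):
        self.inventory: list[Switch] = []

    def register(self, sw: Switch) -> None:
        self.inventory.append(sw)

    def apply_everywhere(self, policy: Policy) -> None:
        for sw in self.inventory:            # same policy regardless of tier
            sw.install(policy)

ctrl = SdnController()
for name, kind in [("tor-1", "ToR"), ("eor-1", "EoR"), ("agg-1", "aggregation"), ("vsw-7", "virtual")]:
    ctrl.register(Switch(name, kind))
ctrl.apply_everywhere(Policy("web-qos", "tcp dst port 80", "rate-limit 1Gbps"))
print([(s.name, len(s.rules)) for s in ctrl.inventory])
```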
Figure 2 shows a traditional multi-tier data center network and how an SDN controller can manage the entire network from a centralized location.
SDN’s basic tenet is to remove vendor-specific dependencies, reduce complexity and improve control, allowing the network to quickly adapt to changes in business needs. Other key SDN requirements are the disaggregation of control and data planes, and the integration of strong compute and packet processing capabilities. Companies are now collaborating to demonstrate the feasibility of a complete SDN solution utilizing the unique compute capabilities and power efficiency of heterogeneous, general purpose processors.
Software Enablement for SDN
One such demonstration of the integration needed to enable SDN is an ETSI NFV proof of concept in which several companies demonstrated the integration of the Data Plane Development Kit (DPDK) on an x86 platform and OpenDataPlane (ODP) on an ARM-based platform running OpenStack. The DPDK and ODP middleware enables fast packet I/O on general-purpose CPU platforms, eliminating the typical data-path bottleneck that arises when packets must traverse the kernel network stack rather than passing directly to user space. This middleware is a must-have for an SDN solution, providing a unified interface across platforms, including x86 and ARM64.
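The role such middleware plays can be sketched, very loosely, as a common poll-mode interface that data-plane code targets while a backend binds to DPDK on x86 or ODP on ARM. The Python below is a conceptual stub with hypothetical names, not a binding to either library.

```python
from abc import ABC, abstractmethod

class PacketIoBackend(ABC):
    """Common interface the data-plane code programs against (hypothetical)."""
    @abstractmethod
    def rx_burst(self, max_pkts: int) -> list[bytes]: ...
    @abstractmethod
    def tx_burst(self, pkts: list[bytes]) -> int: ...

class StubBackend(PacketIoBackend):
    """Stand-in for a DPDK- or ODP-backed poll-mode driver."""
    def __init__(self):
        self.queue = [b"pkt-%d" % i for i in range(4)]

    def rx_burst(self, max_pkts):
        batch, self.queue = self.queue[:max_pkts], self.queue[max_pkts:]
        return batch

    def tx_burst(self, pkts):
        return len(pkts)        # pretend everything was transmitted

def poll_loop(io: PacketIoBackend, iterations: int) -> None:
    # Poll-mode loop: batches of packets are pulled in user space,
    # bypassing the kernel stack that would otherwise be the bottleneck.
    for _ in range(iterations):
        batch = io.rx_burst(32)
        if batch:
            io.tx_burst(batch)

poll_loop(StubBackend(), iterations=3)
```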
High Compute Power in a Low Power Envelope
An SDN controller needs strong compute capability to handle the large amount of control traffic coming from many SDN switches, since each individual flow must be handled by the central SDN controller. This raises concerns about the SDN controller's performance and its role as a single point of failure.
Different architectures have been proposed in the industry to mitigate the load on the central controller. One example is the distributed-centralized controller, which has several SDN controllers, each managing a subsection of the network, with an additional control layer managing these regional controllers. This architecture requires smart, distributed and powerful compute capabilities throughout the entire network of SDN controllers. Different nodes, including SDN switch nodes, require different levels of performance and power, so SDN implementations benefit from vendor platforms that offer a range of performance capabilities, matching the appropriate level of resources to each point in the network design.
Security Enhancements
There is a growing need for security, and as the amount of control traffic increases, the need for crypto acceleration or offload increases with it. By offloading crypto operations to acceleration engines, such as a Crypto Co-processor (CCP) on the CPU, or to the GPU, system-level performance can be maintained without compromising compute performance.
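A hedged sketch of the offload decision is shown below. The CryptoAccelerator class is hypothetical, and the hash-based "protection" is only a stand-in for real IPsec/TLS crypto; the point is the dispatch between an acceleration engine and a CPU fallback path.

```python
import hashlib  # stdlib-only stand-in; real systems use AES/IPsec via a CCP, CPU crypto extensions or a GPU kernel

class CryptoAccelerator:
    """Hypothetical handle to an offload engine such as a crypto co-processor."""
    def __init__(self, available: bool):
        self.available = available

    def submit(self, payload: bytes) -> bytes:
        # Placeholder for work performed on the acceleration engine.
        return hashlib.sha256(b"accel:" + payload).digest()

def protect(payload: bytes, engine=None) -> bytes:
    """Offload when an engine is present; otherwise fall back to the CPU path."""
    if engine is not None and engine.available:
        return engine.submit(payload)           # CPU stays free for control-plane work
    return hashlib.sha256(payload).digest()     # CPU fallback

print(protect(b"control-channel message", CryptoAccelerator(available=True)).hex()[:16])
```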
Deep Packet Inspection (DPI) - Understanding Network Traffic Flow
In order for an SDN controller to manage the network and its associated policies, it needs a good ‘understanding’ of network traffic. Centralized or distributed SDN architectures can build this understanding by collecting sets of packets from a traffic flow and analyzing them. There are two ways to support this requirement.
• Option 1—Assuming a big pipe/channel between the SDN switches and the SDN controller, all deep packet inspection or application recognition can be done in the central controller with a powerful DPI engine.
• Option 2—A small DPI engine can be implemented in the distributed SDN switches. These switches perform basic deep packet inspection, then report the results or forward only the streams of important traffic. This approach is cheaper and simpler to implement and better fits the basic SDN tenet of reducing complexity (a minimal sketch of this switch-side option follows the list).
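The Python sketch below illustrates the switch-side option under simplifying assumptions: a tiny, hypothetical signature set, a per-flow sample limit, and a report callback standing in for the channel to the controller. Production DPI engines use far richer pattern databases and flow tracking.

```python
import re
from collections import defaultdict

# Tiny signature set for illustration only.
SIGNATURES = {
    "http": re.compile(rb"^(GET|POST|HEAD) "),
    "tls":  re.compile(rb"^\x16\x03"),        # TLS handshake record
}

SAMPLE_LIMIT = 4        # inspect only the first few packets of each flow

class SwitchDpi:
    """Option 2: a small DPI engine on the switch that reports verdicts upstream."""
    def __init__(self, report):
        self.report = report                  # callback toward the SDN controller
        self.seen = defaultdict(int)

    def inspect(self, flow_id: str, payload: bytes) -> None:
        if self.seen[flow_id] >= SAMPLE_LIMIT:
            return                            # flow already classified or given up on
        self.seen[flow_id] += 1
        for app, pattern in SIGNATURES.items():
            if pattern.search(payload):
                self.report(flow_id, app)     # only the verdict crosses the network
                self.seen[flow_id] = SAMPLE_LIMIT
                return

dpi = SwitchDpi(report=lambda flow, app: print(f"{flow} classified as {app}"))
dpi.inspect("10.0.0.1:443->10.0.0.2:51812", b"\x16\x03\x01\x02\x00...")
```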
Low-cost, low-power processors can be used for DPI applications. The combination of CPUs and GPUs found in heterogeneous architectures, with the GPU highly optimized for parallel programmable workloads, provides a significant performance advantage.
I/O Integration
The main processor for SDN requires high-speed I/O, for example embedded network interfaces such as 1GbE and 10GbE along with PCIe connectivity. This integration can lower system cost and reduce system design complexity.
Software
Complicating the development of new SDN solutions is the continuing evolution of standards. Across the industry there are different approaches to enabling network virtualization (for example, VXLAN and NVGRE), and these standards continue to evolve as they move to their next phases. To meet the requirements of these evolving standards, and of any emerging network overlay protocols, platforms must provide flexibility and ease of programmability. As an example, the transition from the OpenFlow 1.0 specification to OpenFlow 1.3 significantly increased complexity, as the newer revision aims to support many more types of networking functions and protocols.
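As a concrete illustration of one such overlay, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an inner frame. The outer UDP (destination port 4789), IP and Ethernet layers that a real encapsulation would add are omitted here.

```python
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned destination port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags_and_reserved = VXLAN_FLAG_VNI_VALID << 24      # flags byte + 24 reserved bits
    vni_and_reserved = vni << 8                          # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # The result would then be wrapped in UDP, IP and an outer Ethernet
    # header before transmission; those layers are omitted in this sketch.
    return vxlan_header(vni) + inner_frame

print(encapsulate(b"\x00" * 14, vni=5001).hex()[:16])   # header for VNI 5001
```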
Platform Needs
Modern heterogeneous compute platforms contain the following three major function blocks:
• General purpose, programmable scalar (CPU) and vector processing cores (GPU)
• High-performance bus
• Common, low-latency memory model
Leading heterogeneous designs are critical to maximizing throughput. For example, on AMD platforms incorporating the Heterogeneous System Architecture (HSA), the CPU hands the GPU only pointers to the data blocks; the GPU processes the data blocks in place in memory and hands the results back to the CPU. HSA ensures cache coherency between the CPU and the GPU. Figure 3 depicts an overview of this architecture.
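The benefit of passing pointers rather than copying buffers can be hinted at with a deliberately loose, CPU-only Python analogy: a worker receives a view of the producer's buffer and transforms the data in place, so the producer reads results from the same memory. Real HSA platforms share coherent physical memory between CPU and GPU; nothing below touches a GPU.

```python
def gpu_like_worker(view: memoryview) -> None:
    # Stands in for a massively parallel kernel operating on shared memory.
    for i in range(len(view)):
        view[i] = (view[i] * 3) & 0xFF

packet_batch = bytearray(range(16))          # data produced on the "CPU" side
gpu_like_worker(memoryview(packet_batch))    # only a reference is handed over, no copy
print(packet_batch[:4])                      # results appear in the original buffer
```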
GPUs are extremely efficient for parallel processing applications, and they can also be used for crypto operations, DPI, classification, compression and other workloads. In the case of crypto, the CPU does not have to be directly involved in data plane crypto operations, so system-level performance can be maintained even as the amount of traffic requiring encryption or decryption increases. On a heterogeneous-capable processor, software can selectively accelerate or offload compute-intensive operations from the CPU to the GPU. A few additional functions that can be accelerated or offloaded to the GPU:
• DPI: regular expression (RegEx) matching engines
• Security operations (such as IPsec): RSA and other crypto operations
• Compression for distributed storage applications
Figure 4 shows a number of different networking use cases and examples of where different levels of embedded processors integrate into the solution.
Conclusion
SDN introduces a new approach to network resource utilization and management, and each networking vendor in the market is looking for its own way to build SDN solutions. One key step toward enabling SDN is opening up the intelligence of switches and routers so that proprietary vendor technologies can be abstracted.
Mega data center players (Amazon, Google, Facebook and the like) are implementing technologies that give them greater flexibility at lower cost. Amazon and Google are building their own networking (white box) switches so that they do not have to rely on platforms produced by OEM vendors. Facebook is driving the Open Compute Project (OCP) to develop specifications for open-architecture switches that will be manufactured by low-cost original design manufacturers (ODMs). Facebook's open-architecture approach is creating an ecosystem in which standard, high-volume commodity platforms can be used to minimize CAPEX and OPEX.
SDN will drive the industry toward a more software-centric architecture and implementation. In this environment, OEMs will find it more difficult to provide platform differentiators. With SDN, the need for less expensive, easy-to-access hardware becomes paramount, and platform-specific, value-added services are deprioritized.
About the Author
Seong Kim is a system architect in AMD’s Embedded Networking Division. He has more than 15 years of experience in networking systems architecture and technical marketing. His recent initiatives include NFV, SDN, server virtualization, wireless communication networking, and security and threat management solutions. Dr. Kim’s work has appeared in numerous publications, including IEEE Communications and Elsevier magazines, and he has presented at several industry conferences and webinars. He has several US patents granted and pending in the field of networking. Kim holds a Ph.D. in Electrical Engineering from the State University of New York at Stony Brook (SBU) and an M.B.A. from Lehigh University.