AT&T has deployed its open disaggregated core routing platform on its 400G transport network. The router leverages technology from Broadcom, DriveNets, and UfiSpace.
The white box hardware, which was designed and manufactured by UfiSpace, is based on Broadcom's Jericho2 switching silicon and follows AT&T's Distributed Disaggregated Chassis (DDC) design. It consists of three components: a 40x100G line card system, a 10x400G line card system, and a 48x400G fabric system. These building blocks can be deployed in various configurations to build routers with capacities anywhere from 4 Tbps to 192 Tbps.
DriveNets' Network Cloud solution and its Network Operating System (NOS) software provide management and control of the white box hardware, enabling MPLS transport across AT&T's global, multi-service core backbone. The software also connects to AT&T's centralized SDN controller, which optimizes the routing of traffic across the core.
AT&T notes that the deployment of this disaggregated core routing platform is coupled with the rollout of the company's next-gen, long-haul 400G optical transport platform.
“I’m proud to announce today that we have now deployed a next gen IP/MPLS core routing platform into our production network based on the open hardware designs we submitted to OCP last fall,” said Andre Fuetsch, AT&T’s CTO of Network Services, in his keynote speech at the Open Networking and Edge Summit (ONES). “We chose DriveNets, a disruptive supplier, to provide the Network Operating System (NOS) software for this core use case.”
“We are thrilled about this opportunity to work with AT&T on the development of their next gen, software-based core network,” said Ido Susan, CEO of DriveNets. “AT&T has a rigorous certification process that challenged my engineers to their limits, and we are delighted to take the project to the next level with deployment into the production network.”
AT&T contributes Distributed Disaggregated Chassis white box to OCP
AT&T has contributed its specifications for a Distributed Disaggregated Chassis (DDC) white box architecture to the Open Compute Project (OCP). The contributed design aims to define a standard set of configurable building blocks to construct service provider-class routers, ranging from single line card systems, a.k.a. “pizza boxes,” to large, disaggregated chassis clusters. AT&T said it plans to apply the design to the provider edge (PE) and core routers that comprise its global IP Common Backbone (CBB).
“The release of our DDC specifications to the OCP takes our white box strategy to the next level,” said Chris Rice, SVP of Network Infrastructure and Cloud at AT&T. “We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace.”
AT&T’s DDC white box design, which is based on Broadcom’s Jericho2 chipset, calls for three key building blocks (modeled in the sketch after this list):
- A line card system that supports 40 x 100G client ports, plus 13 x 400G fabric-facing ports.
- A line card system that supports 10 x 400G client ports, plus 13 x 400G fabric-facing ports.
- A fabric system that supports 48 x 400G ports. A smaller 24 x 400G fabric system is also included.
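As a rough illustration only, the three building blocks can be represented as simple records; the names and structure below are hypothetical and not part of the OCP contribution, but the port counts come from the specifications listed above.

```python
# Hypothetical sketch of the three DDC building blocks described above.
# The class and field names are illustrative, not from the OCP spec.
from dataclasses import dataclass

@dataclass
class DdcBox:
    name: str
    client_ports: int        # client-facing ports
    client_speed_gbps: int   # speed per client port
    fabric_ports: int        # 400G fabric-facing ports
    fabric_speed_gbps: int = 400

    @property
    def client_capacity_tbps(self) -> float:
        return self.client_ports * self.client_speed_gbps / 1000

BUILDING_BLOCKS = [
    DdcBox("40x100G line card system", client_ports=40, client_speed_gbps=100, fabric_ports=13),
    DdcBox("10x400G line card system", client_ports=10, client_speed_gbps=400, fabric_ports=13),
    DdcBox("48x400G fabric system",    client_ports=0,  client_speed_gbps=0,   fabric_ports=48),
]

for box in BUILDING_BLOCKS:
    print(f"{box.name}: {box.client_capacity_tbps:g} Tbps of client capacity")
# Either line card system carries 4 Tbps of client traffic (40 x 100G = 10 x 400G);
# the fabric system's 48 x 400G ports all face the fabric, so it adds no client capacity.
```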
AT&T points out that the line card and fabric systems are implemented as stand-alone white boxes, each with its own power supplies, fans and controllers, and that backplane connectivity is replaced with external cabling. This approach enables massive horizontal scale-out, since system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane. Cooling is significantly simplified because the components can be physically distributed if required. The strict manufacturing tolerances needed to build a modular chassis, and the possibility of bent pins on the backplane, are avoided entirely.
Four typical DDC configurations include (see the capacity sketch after this list):
- A single line card system that supports 4 terabits per second (Tbps) of capacity.
- A small cluster that consists of 1+1 fabric systems (for added reliability) and up to 4 line card systems. This configuration supports 16 Tbps of capacity.
- A medium cluster that consists of 7 fabric systems and up to 24 line card systems. This configuration supports 96 Tbps of capacity.
- A large cluster that consists of 13 fabric systems and up to 48 line card systems. This configuration supports 192 Tbps of capacity.
The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links. The design inherently supports redundancy in the event that fabric links fail.
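The capacities listed for the four configurations follow directly from the 4 Tbps of client bandwidth each line card system provides. The short sketch below reproduces those figures; it is illustrative only and not taken from AT&T's specifications.

```python
# Illustrative check of the DDC configuration capacities listed above.
# Each line card system contributes 4 Tbps of client capacity (40 x 100G or 10 x 400G).
TBPS_PER_LINE_CARD_SYSTEM = 4

configurations = {
    "single line card system": {"fabric_systems": 0,  "line_card_systems": 1},   # standalone "pizza box"
    "small cluster":           {"fabric_systems": 2,  "line_card_systems": 4},   # 1+1 fabric systems
    "medium cluster":          {"fabric_systems": 7,  "line_card_systems": 24},
    "large cluster":           {"fabric_systems": 13, "line_card_systems": 48},
}

for name, cfg in configurations.items():
    capacity_tbps = cfg["line_card_systems"] * TBPS_PER_LINE_CARD_SYSTEM
    print(f"{name}: {cfg['line_card_systems']} line card systems, "
          f"{cfg['fabric_systems']} fabric systems -> {capacity_tbps} Tbps")
# -> 4, 16, 96 and 192 Tbps, matching the four configurations above.
```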
DriveNets scales its disaggregated router to 400G
DriveNets, a start-up based in Israel, announced 400G-port routing support for its Network Cloud software-based disaggregated router.
The company says its Network Cloud is the only router on the market designed to scale 100G/400G ports up to 768 Tbps of performance. Inspired by the hyperscalers, Network Cloud runs the routing data plane on cost-efficient white boxes and the control plane on standard servers, decoupling network cost from capacity growth.
DriveNets’ latest routing software release supports a packet-forwarding white box based on Broadcom’s Jericho2 chipset, which offers high-speed, high-density 100G and 400G port interfaces.
The platform is now being tested and certified by a tier-1 telco customer.
DriveNets was founded in 2015 by Ido Susan and Hillel Kobrinsky. Susan previously co-founded Intucell, which was acquired by Cisco for $475 million. Kobrinsky founded the web conferencing specialist Interwise, which was acquired by AT&T for $121 million.
In February, the company emerged from stealth with $110 million in Series A funding.