Thursday, March 14, 2024

Broadcom shows 200-Gbps per lane optical interconnects

Ahead of this month's #OFC24 in San Diego, Broadcom is announcing an expanded portfolio of optical interconnect solutions, including:

  • Production release of 200-Gbps per lane (200G/lane) electro-absorption modulated laser (EML) to pair with next generation GPUs
  • Demonstration of the industry’s first 200G/lane vertical-cavity surface-emitting laser (VCSEL) 
  • Demonstration of continuous wave (CW) laser with high efficiency and high linearity for silicon photonics (SiPh) modulation at 200G
  • Shipment of more than 20 million channels of 100G/lane high speed optical components used in AI/ML systems

The demand for AI clusters is expected to drive rapid adoption of 200G/lane optics in 800G and 1.6T transceivers. Broadcom’s 200G VCSEL and EML products follow up on successful deployment of 100G/lane VCSEL and EML chips into first-generation generative AI networks.
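The lane-rate arithmetic behind these transceiver speeds is simple multiplication; a minimal sketch (the function name is illustrative, not a product term):

```python
# Transceiver aggregate bandwidth = lane count x per-lane rate (Gbps).

def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Total transceiver throughput from lane count and per-lane rate."""
    return lanes * gbps_per_lane

# 100G/lane era: 8 lanes -> 800G modules
assert aggregate_gbps(8, 100) == 800
# At 200G/lane, the same 4- and 8-lane footprints reach 800G and 1.6T
assert aggregate_gbps(4, 200) == 800
assert aggregate_gbps(8, 200) == 1600
```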

At OFC24, Broadcom will showcase the following products and technologies:

  • 200G VCSEL technology demonstration
  • 200G EML product demonstration
  • 200G SiPh modulation with CW lasers
  • 100G VCSELs for emerging applications including server interconnect, PCIe interconnect and next generation Fibre Channel

“Generative AI has unleashed a network transformation necessitating an order of magnitude increase in high-speed optical links compared to standard network requirements,” said Near Margalit, Ph.D., vice president and general manager of the Optical Systems Division at Broadcom. “We will continue to invest in VCSEL, EML and CW laser technologies to deliver disruptive innovation in bandwidth, power and latency for optical interconnects in next generation AI links.”

https://www.broadcom.com/company/news/product-releases/61941

#OFC24 preview: Ushering in Terabit Transceivers

What's next for optical interconnects needed for next-gen AI clusters? Manish Mehta, VP Marketing from Broadcom's Optical Systems Division, provides an insightful look into the advancements in this field:

- Manish discusses the progress in optical components, particularly the development of the terabit transceiver. He highlights Broadcom's ongoing production of 100G vertical-cavity surface-emitting lasers (VCSELs) and 100G electro-absorption modulated lasers (EMLs), and the upcoming showcase of 200G EMLs and 200G VCSELs that can be used in 1.6-terabit transceivers. This is a significant step forward for the next generation of AI networks.

- He also delves into the company's investment in co-packaged optics, revealing the next phase of their 51.2 T co-packaged Ethernet switch product.

- Lastly, Manish touches on the ongoing debates and efforts in the industry to reduce the power of the optical interconnect, a goal targeted by both LPO and CPO. This is a crucial aspect of the industry's evolution and one that Broadcom is keenly focused on.


https://youtu.be/V7ghZj72sSE

Want to be involved in our video series? Going to OFC? Contact info@nextgeninfra.io

Broadcom ships its 51.2 Tbps CPO switch

Broadcom confirmed commercial deliveries of Bailly, the industry’s first 51.2 Tbps co-packaged optics (CPO) Ethernet switch. The product integrates eight silicon photonics based 6.4-Tbps optical engines with Broadcom’s StrataXGS Tomahawk 5 switch chip. 

Compared to conventional switches with pluggable transceivers, Bailly's CPO interconnect promises 70% lower power consumption and delivers an 8x improvement in silicon area efficiency.

Bailly integrates hundreds of optical components and hundreds of millions of transistors into a single optical engine. This high degree of integration enables the optical engines to be placed on a common substrate with complex logic ASICs, minimizing the need for signal conditioning circuitry.

51.2-Tbps CPO Switch Product Highlights:

  • Broadcom 51.2-Tbps StrataXGS Tomahawk 5 switch silicon
  • Broadcom 6.4T-FR4 Bailly SCIP optical engines with Broadcom Fiber Connector (BFC) for CPO systems
  • 4RU system design with high-efficiency air cooling to deliver 128 ports of 400G FR4 connectivity externally fiber-coupled with 128 duplex LC optical connectors
  • CPO engine to front-panel routing supports traditional single-mode fiber 
  • System design compatible to support multiple remote laser modules (RLM) for field replaceability
  • More than 70% optical interconnect power consumption savings compared to standard pluggable optics solutions

Broadcom notes that conventional pluggable optical transceivers consume approximately 50% of system power and constitute more than 50% of the cost of a traditional switch system. Bailly's high degree of integration provides the lowest latency, highest bandwidth density, lowest power, and lowest cost solution for building large-scale, power-efficient AI clusters.
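The figures quoted above are mutually consistent; a quick back-of-the-envelope check (the 35% system-level estimate in the last step is our own illustrative arithmetic combining the two stated percentages, not a Broadcom claim):

```python
# Consistency checks on the Bailly figures quoted above (rates in Tbps).

# 8 optical engines x 6.4 Tbps each = 51.2 Tbps of switch capacity
assert abs(8 * 6.4 - 51.2) < 1e-9

# 128 front-panel ports x 400G FR4 = 51.2 Tbps, matching the switch silicon
assert abs(128 * 0.4 - 51.2) < 1e-9

# If pluggable optics consume ~50% of system power and CPO cuts that
# share by 70%, the implied system-level saving is roughly 0.5 * 0.7 = 35%.
system_power_saving = 0.5 * 0.7
assert abs(system_power_saving - 0.35) < 1e-9
```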

“As AI clusters demand higher bandwidth density, lower power consumption and lower latency, we are pleased to announce delivery of the industry’s first 51.2-Tbps CPO switch,” said Near Margalit, Ph.D., vice president and general manager of the Optical Systems Division at Broadcom. “Bailly will enable hyperscalers to deploy lower-power, cost-efficient, large-scale AI and compute clusters. Broadcom’s technology leadership and manufacturing innovations help Bailly deliver 70% better power efficiency and ensure an optical I/O roadmap that can walk in tandem with the future bandwidth and power needs of AI infrastructure.”



https://www.broadcom.com/company/news/product-releases/61946


https://youtu.be/KsiHanEPkT8

Want to be involved in our video series? Contact info@nextgeninfra.io


https://youtu.be/E4gE8XEoGrQ

Filmed at OCP Summit 2023 (#OCPSummit23) in San Jose.


Infinera intros InP-based terabit optics

At #OFC24, Infinera is introducing a new line of high-speed intra-data center optics based on monolithic indium phosphide (InP) photonic integrated circuit (PIC) technology. 

The new ICE-D optics are designed to dramatically lower cost and power per bit while providing intra-data center connectivity at speeds of 1.6 terabits per second (Tb/s) and greater. Infinera’s flexible ICE-D intra-data center optics are designed to support integration into a variety of different intra- and campus data center optical solutions. These solutions include digital signal processor (DSP)-based retimed optics, linear-drive pluggable optics (LPO), and co-packaged optics (CPO) for both serial and parallel fiber applications and distances ranging from 100 meters up to 10 kilometers. ICE-D test chips are currently available and have demonstrated a reduction in power per bit by as much as 75 percent while simultaneously increasing connectivity speed.
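"Power per bit" is simply module power divided by line rate; a hedged sketch of how a 75% reduction can coexist with rising speed (the wattages below are hypothetical values chosen for illustration, not Infinera figures):

```python
# Power efficiency of an optical link in picojoules per bit:
#   pJ/bit = power (W) / rate (Tbps)      (1 W / 1 Tbps = 1 pJ/bit)
# The wattages below are hypothetical, chosen only to show how
# power per bit can fall 75% even as the line rate doubles.

def pj_per_bit(power_watts: float, rate_tbps: float) -> float:
    return power_watts / rate_tbps

baseline = pj_per_bit(power_watts=16.0, rate_tbps=0.8)  # 20 pJ/bit at 800G
next_gen = pj_per_bit(power_watts=8.0, rate_tbps=1.6)   # 5 pJ/bit at 1.6T

reduction = 1 - next_gen / baseline
assert abs(reduction - 0.75) < 1e-9  # 75% lower power per bit
```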

Infinera said the new InP optics will leverage the unique capabilities of its U.S.-based optical semiconductor fab. Infinera’s intra-data center optical connectivity technology enables highly integrated solutions that combine multiple optical functions onto a single monolithic chip, resulting in industry-leading density, low latency, and power efficiency. 

“Infinera is excited to apply our 20+ years of pioneering innovation in optical connectivity solutions to solve the challenge of economically scaling intra-data center connectivity to support the deluge of bandwidth stemming from AI applications,” said Ron Johnson, SVP and General Manager of Optical Systems and Global Engineering at Infinera. “Our unique monolithic InP PIC technology puts us in an ideal position to develop innovative technologies to provide cost- and power-efficient, high-capacity intra-data center connectivity solutions.”

#OFC24 preview: Optical Innovation for the Terabit Age

How will optical transport keep up with the relentless pace of bandwidth growth in the AI era?  Robert Shore, SVP, Marketing from Infinera highlights innovations at #OFC24:

- The enhancements made to Infinera's GX platform, designed to support all applications across the network, from Metro Edge to subsea. This platform can scale to meet growing bandwidth demands and easily integrate new technologies as they become available.

- The introduction of new aggregation cards, the recently released ICE7 1.2-terabit optical engine, and new optical line system functionality on the GX. These innovations can increase capacity per fiber by nearly 40%, reducing cost, power per bit, and overall total cost of ownership.

- The launch of a suite of intelligent coherent pluggable optics (ICE-X) and new applications for the Open Wave suite of network automation solutions. These are designed to make multi-vendor and multi-layer networks easier to manage, enabling more efficient network design.


https://youtu.be/zMp9ndecSyU

Join us at this year's OFC in San Diego.

Want to be involved in our video series? Contact info@nextgeninfra.io

Australia’s NBN picks Infinera’s GX and ICE-X coherent pluggables

NBN Co (nbn), Australia’s wholesale open-access broadband provider, has selected Infinera’s GX Series platform and ICE-X intelligent coherent pluggables to help upgrade its nationwide optical network.

The Infinera GX and ICE-X solution, which will be deployed across nbn’s 60,000-kilometer fiber optic transport network, will increase traffic aggregation efficiencies and overall network capacity, while enabling significant power and cost per bit savings.

“The need for broadband in Australia is expected to reach levels never seen before over the next decade, and we continue to upgrade and modernise our national network to stay ahead of the evolving needs of our customers,” said Andrew Leong, nbn Chief Technology Officer – Fixed Networks. “By enhancing the performance of our fibre optic backbone by using Infinera’s GX platform and ICE-X pluggable optics, we expect to be able to cost-efficiently scale capacity while delivering a great customer experience.”

https://www.infinera.com/press-release/nbn-selects-infinera-gx-and-ice-x-to-upgrade-national-broadband-network/

Arrcus runs SRv6 Mobile User Plane on NVIDIA's BlueField DPUs

Arrcus is demonstrating a secure 5G network slice using SRv6 Mobile User Plane (MUP), led by SoftBank Corp, and implemented on the NVIDIA BlueField-3 DPU.

The proof-of-concept will be shown at the NVIDIA GTC global AI developer conference, scheduled from March 18th to 21st in San Jose, California. The demonstrations leverage BlueField DPUs and Arrcus' ArcOS network operating system (NOS) solution, aiming to deliver end-to-end network slicing and offer simplicity, high performance, and a lower total cost of ownership.

Network slicing, facilitated by Arrcus' SRv6 MUP implementation, offers automated end-to-end segmentation of network resources, enabling the delivery of advanced applications and services. Additionally, security is paramount in the delivery of network slices, particularly for tasks like secure data transportation for enterprise AI inference and training. IPSec adds end-to-end encryption and authentication to secure the traffic on the network slices.

One of the critical challenges in delivering network slicing with IPSec encryption lies in the attainable network throughput, as it is constrained by the limitations of standard compute processors. BlueField DPUs offload the compute-intensive encrypted data path from CPUs, accelerating it with dedicated hardware. This frees up CPU resources for critical applications while ensuring high-performance data delivery.

"We are thrilled to work with NVIDIA and SoftBank to accelerate the deployment of secure 5G network slices," said Shekar Ayyar, Chairman, and CEO of Arrcus. "SRv6 MUP technology, developed in collaboration with SoftBank, combined with NVIDIA BlueField DPUs, offers a powerful solution that helps address the evolving needs of modern networks, providing advanced security and efficiency while unlocking new revenue opportunities for service providers."

"NVIDIA BlueField DPUs are purpose-built to offload, accelerate, and isolate data center workloads, making them ideal for secure and power efficient 5G network slices,” said Ronnie Vasishta, Senior Vice President of Telecom at NVIDIA. "The integration of BlueField DPUs with Arrcus’ SRv6 technology will help make the deployment of a secure 5G network possible."

"SoftBank is committed to delivering the most advanced and secure 5G network to our customers," said Keiichi Makizono, Executive Vice President & Chief Information Officer (CIO) at SoftBank Corp. "These demonstrations of SRv6 MUP using Arrcus’ ArcOS NOS running on NVIDIA BlueField DPUs are a major step forward in achieving this goal."


https://youtu.be/4sbjBVrRlLA

https://arrcus.com/news/arrcus-to-demonstrate-secure-5g-networking-on-nvidia-bluefield-3-dpus/

CANARIE, ESnet, GÉANT, and Internet2 expand transatlantic bandwidth

Internet2, CANARIE, the Energy Sciences Network (ESnet), and GÉANT have added three 400 Gbps circuits to boost transoceanic capacity for data-intensive science.

The Advanced North Atlantic (ANA) collaboration supports multinational, data-intensive science collaborations, including the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator, and the Square Kilometre Array (SKA), the ongoing effort to build the world's largest radio astronomy observatory.

The joint effort adds three 400 Gbps spectrum circuits between exchange points in the U.S., U.K., and France using the new Amitié subsea cable system, which was completed in July 2023 and spans 6,783 kilometers (4,215 miles).

With the addition of the new circuits, the combined capacity of the ANA collaboration’s trans-Atlantic network is now 2.4 Tbps.
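The stated total implies six 400G circuits now in service across the Atlantic; a quick check (the count of pre-existing circuits is inferred from the total, not itemized in the announcement):

```python
# ANA trans-Atlantic capacity check.
# Three new 400 Gbps circuits on Amitie join the existing circuits;
# the 2.4 Tbps combined total implies six 400G circuits overall.
new_circuits = 3
total_tbps = 2.4
circuit_gbps = 400

total_circuits = round(total_tbps * 1000) // circuit_gbps
assert total_circuits == 6
assert total_circuits - new_circuits == 3  # circuits predating this addition
```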

Key points:

  • ESnet will operate two of the 400 Gbps transoceanic circuits in support of U.S. Department of Energy national laboratories, supercomputing facilities, major scientific instruments, and global collaborators, bringing its total 400 Gbps transoceanic circuits to three. 
  • Internet2 will operate one circuit for members of the U.S. R&E community, as well as its North American partner CANARIE.
  • Internet2 recently added a new 400 Gbps exchange point in Boston. 
  • Internet2 also augmented two existing exchange points with 400 Gbps switching capacity: the Manhattan Landing (MAN-LAN) in New York, NY, and the Washington International Exchange (WIX) in McLean, VA.
  • From the endpoints of the Amitié cable systems in the U.K. and France, GÉANT is providing the connectivity to deliver the trans-Atlantic traffic to the London, Geneva, and Paris points of presence on its pan-European network backbone. 
  • GÉANT is also planning to further reinforce the ANA collaboration's capacity by upgrading its trans-Atlantic connectivity via the GN5-IC1 project.

“We are thrilled to be part of this momentous undertaking alongside our partners Internet2, ESnet, and GÉANT,” said Mark Wolff, chief technology officer at CANARIE. “This advancement in trans-Atlantic high-speed connectivity will enable researchers and students in Canada to contribute to and benefit from global scientific discoveries and is truly a testament to the collaborative ethos of the global research and education networking community.”

“ESnet is excited to be working with the R&E networking community to fulfill our goal of building the necessary bandwidth to support the expanding data-intensive needs of global scientific research collaborations, such as the high-luminosity upgrade of the LHC,” said Jon-Paul Herron, head of ESnet Network Services.

https://www.es.net/news-and-publications/esnet-news/2024/ana-transatlantic-400g-circuits/

Meter rolls out new switches with digital twin capabilities

Meter, a start-up offering Network as a Service (NaaS) delivered as a full-stack solution, is rolling out brand-new switch hardware, firmware, and virtualization features, all built from the ground up to work seamlessly with the rest of the Meter Network Operating System (NOS).

The new switches, which are available in 24-port and 48-port models, offer a digital twin capability for better planning, visibility, configurability, and reliability.

https://www.meter.com/product-newsletter/meter-switch-platform

Meter attracts Silicon Valley investors for its NaaS

Meter, a start-up based in San Francisco, announced a $35 million round of funding for its Network as a Service (NaaS). Meter's NaaS offering is delivered as a full-stack solution, including its own hardware, software, and operations. Using its cloud-managed dashboard, which serves as a digital counterpart for routing, switching, wireless, security, DNS security, VPN, and SD-WAN, customers can easily schedule maintenance, adjust configurations...


FCC increases broadband speed benchmark to 100/20 Mbps

The FCC is raising the minimum speed benchmark for "high-speed fixed broadband" to download speeds of 100 Mbps and upload speeds of 20 Mbps – a four-fold increase over the 25/3 Mbps benchmark set by the Commission in 2015. A newly issued FCC report also sets a long-term goal of 1 Gbps/500 Mbps for broadband speeds.

The increase in the Commission’s fixed speed benchmark for advanced telecommunications capability is based on the standards now used in multiple federal and state programs (such as NTIA’s BEAD Program and multiple USF programs), consumer usage patterns, and what is actually available from and marketed by internet service providers.

The Report concludes that advanced telecommunications capability is not being deployed in a reasonable and timely fashion based on the total number of Americans, Americans in rural areas, and people living on Tribal lands who lack access to such capability, and the fact that these gaps in deployment are not closing rapidly enough.

Using the agency’s Broadband Data Collection deployment data for the first time rather than FCC Form 477 data, the Report shows that, as of December 2022:

  • Fixed terrestrial broadband service (excluding satellite) has not been physically deployed to approximately 24 million Americans, including almost 28% of Americans in rural areas, and more than 23% of people living on Tribal lands;
  • Mobile 5G-NR coverage has not been physically deployed at minimum speeds of 35/3 Mbps to roughly 9% of all Americans, to almost 36% of Americans in rural areas, and to more than 20% of people living on Tribal lands;
  • 45 million Americans lack access to both 100/20 Mbps fixed service and 35/3 Mbps mobile 5G-NR service; and
  • Based on the new 1 Gbps per 1,000 students and staff short-term benchmark for schools and classrooms, 74% of school districts meet this goal.
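The "four-fold" figure refers to the download benchmark; the upload increase is larger. A quick check of the ratios (the per-person figure is our own illustrative division of the school benchmark):

```python
# FCC fixed-broadband benchmark changes (Mbps).
old_down, old_up = 25, 3
new_down, new_up = 100, 20

assert new_down / old_down == 4.0          # the "four-fold" download increase
assert abs(new_up / old_up - 6.67) < 0.01  # upload rose roughly 6.7x

# School benchmark: 1 Gbps shared across 1,000 students/staff
# works out to about 1 Mbps per person.
per_person_mbps = 1000 / 1000
assert per_person_mbps == 1.0
```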

https://docs.fcc.gov/public/attachments/DOC-401205A1.pdf