Wednesday, April 19, 2023

Marvell advances its high-speed interconnects on TSMC's 3nm node

Marvell demonstrated high-speed, ultra-high-bandwidth silicon interconnects produced on TSMC's 3-nanometer (3nm) process. The demonstrations include Marvell's 112G XSR SerDes (serializer/de-serializer), Long Reach SerDes, PCIe Gen 6 / CXL 3.0 SerDes, and a 240 Tbps parallel die-to-die interconnect.

SerDes and parallel interconnects serve as high-speed pathways for exchanging data between chips, or between chiplets inside a package. Together with 2.5D and 3D packaging, these technologies will eliminate system-level bottlenecks in the most complex semiconductor designs. SerDes also reduce pin counts, trace counts and circuit board space, lowering cost. A rack in a hyperscale data center might contain tens of thousands of SerDes links.
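
As a rough illustration of the serializer/de-serializer idea (a conceptual sketch only, not any Marvell implementation), the toy Python example below shifts a 64-bit parallel word out as a single bit stream and reassembles it at the far end. This is why one SerDes lane can replace a wide parallel bus, saving the pins, traces and board space mentioned above.

# Toy serializer/de-serializer: illustrates the concept only, not any
# Marvell product. A 64-bit parallel word is sent one bit at a time
# over a single "lane" and reassembled on the receive side.

WORD_BITS = 64

def serialize(word: int):
    """Shift a parallel word out MSB-first as a stream of single bits."""
    for i in reversed(range(WORD_BITS)):
        yield (word >> i) & 1          # one bit per unit interval

def deserialize(bits) -> int:
    """Reassemble the bit stream back into a parallel word."""
    word = 0
    for bit in bits:
        word = (word << 1) | bit
    return word

if __name__ == "__main__":
    tx_word = 0xDEADBEEFCAFEF00D
    rx_word = deserialize(serialize(tx_word))
    assert rx_word == tx_word
    # One serial lane replaces a 64-pin parallel bus (plus clock/strobe pins),
    # which is the pin/trace/board-space saving described above.
    print(hex(rx_word))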

The new parallel die-to-die interconnect, for example, enables aggregate data transfers up to 240 Tbps, 45% faster than available alternatives for multichip packaging applications. 

Marvell incorporates its SerDes and interconnect technologies into its flagship silicon solutions including Teralynx switches, PAM4 and coherent DSPs, Alaska Ethernet physical layer (PHY) devices, OCTEON processors, Bravera storage controllers, Brightlane™ automotive Ethernet chipsets, and custom ASICs. Moving to a 3nm process enables engineers to lower the cost and power consumption of chips and computing systems while maintaining signal integrity and performance.

"Interconnects are taking on heightened importance as clouds and other computing systems grow in size, complexity and capability. Our advanced SerDes and parallel interfaces will play a significant role in providing a platform for developing chips with best-in-class bandwidth, latency, bit error rate, and power efficiency for meeting the demands of AI and other complex workloads," said Raghib Hussain, president of products and technologies at Marvell. "We are proud to be able to deliver such advances on TSMC's 3nm technology and take semiconductor designs to the next level for our customers around the world."


  • Marvell was also the first data infrastructure silicon supplier to sample and to commercially release 112G SerDes, and it has been a leader in data infrastructure products based on TSMC's 5nm process.

https://www.marvell.com/company/newsroom/marvell-demonstrates-industrys-first-3nm-data-infrastructure-silicon.html

New Middle Mile: Hyperscaler and cloud trends drive middle mile enhancements

In this video, Jason Taylor, VP of Large Enterprise Sales at Zayo, explains why the company is positioned for fiber success in 2023. The following are Jason's key talking points:

- Zayo has 16 million fiber miles and is connected to 44,000 buildings and 1,400 data centers, making it well prepared to execute in the fiber space.

- Zayo is upgrading its fiber network to 400Gbps and will likely need to upgrade to 800Gbps in the near future to keep pace with growing bandwidth demand.

- Zayo partners with multiple cloud providers and launched an API developer portal in 2022 to help customers explore, onboard and test its APIs, offering guidance on software integration strategies, network discovery, quote and order capability, and service management.

Check out the rest of the New Middle Mile (#nmm2023) video showcase here: https://ngi.fyi/nmm23yt

Want to be in one of NextGenInfra.io's videos? Contact us at info@nextgeninfra.io

SK hynix samples 12-layer HBM3 chip with 24 GB capacity

SK hynix announced sampling of the first 12-layer HBM3 product with a 24 GB memory capacity -- the largest in the industry to date. 

HBM (High Bandwidth Memory) vertically interconnects multiple DRAM chips and dramatically increases data processing speed compared to traditional DRAM products. HBM3 is the fourth-generation product, succeeding HBM, HBM2 and HBM2E.

"The company succeeded in developing the 24GB package product that increased the memory capacity by 50% from the previous product, following the mass production of the world's first HBM3 in June last year," SK hynix said. "We will be able to supply the new products to the market from the second half of the year, in line with growing demand for premium memory products driven by the AI-powered chatbot industry."

SK hynix said the gains in process efficiency and performance stability were achieved by applying Advanced Mass Reflow Molded Underfill (MR-MUF) technology to the latest product, while Through Silicon Via (TSV) technology reduced the thickness of a single DRAM chip by 40%, achieving the same stack height level as the 16GB product.
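
As a rough sanity check of the stack-height claim, assume the 16GB HBM3 product is an 8-high stack of the same 16Gb (2GB) dies used in the 12-high 24GB part; that die count is an assumption, not stated in the announcement.

# Back-of-the-envelope stack-height comparison (die counts assumed, see above).
die_height = 1.0                  # normalized thickness of a standard DRAM die
h_16gb = 8 * die_height           # assumed 8-high 16GB stack
h_24gb = 12 * die_height * 0.6    # 12 dies, each roughly 40% thinner
print(f"{h_24gb / h_16gb:.2f}")   # ~0.90, i.e. a comparable overall stack height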

OCP Switch Abstraction Interface moves under LF's DENT Project

The Linux Foundation (LF) announced the incorporation of the Open Compute Switch Abstraction Interface (SAI) into the open source DENT Network Operating System (NOS) project.

The DENT project is a Linux-based NOS designed to power disaggregated networking solutions for enterprises and data centers.

The Linux Foundation said that by incorporating OCP's SAI, an open-source Hardware Abstraction Layer (HAL) for network switches, DENT has taken a significant step forward in enabling seamless support for a wide range of Ethernet switch ASICs, thereby expanding its compatibility and fostering greater innovation in the networking space.

The move is driven by the need to broaden adoption of standardized interfaces for programming network switch ASICs, enabling hardware vendors to develop and maintain their device drivers independently of the Linux kernel. SAI offers several advantages (a simplified illustration follows the list):

  • Hardware Abstraction: SAI provides a hardware-agnostic API, enabling developers to work on a consistent interface across different switch ASICs, thus reducing development time and effort.
  • Vendor Independence: By separating the switch ASIC drivers from the Linux kernel, SAI enables hardware vendors to maintain their drivers independently, ensuring timely updates and support for the latest hardware features.
  • Ecosystem Support: SAI is backed by a thriving community of developers and vendors, ensuring continuous improvements and ongoing support for new features and hardware platforms.
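
The Python sketch below is a deliberately simplified, hypothetical hardware-abstraction layer, not the actual SAI C API; it only illustrates the pattern the bullets describe, where the NOS programs a single vendor-neutral interface and each ASIC vendor supplies its own driver behind it.

# Hypothetical HAL sketch -- illustrative only, not the real SAI headers.
from abc import ABC, abstractmethod

class SwitchAsicDriver(ABC):
    """Vendor-neutral interface the NOS programs against (the 'HAL')."""

    @abstractmethod
    def create_vlan(self, vlan_id: int) -> None: ...

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

class VendorXDriver(SwitchAsicDriver):
    """Maintained by the ASIC vendor, outside the NOS / kernel tree."""

    def create_vlan(self, vlan_id: int) -> None:
        print(f"[vendor-x sdk] program VLAN {vlan_id}")

    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor-x sdk] program route {prefix} via {next_hop}")

def provision(driver: SwitchAsicDriver) -> None:
    # NOS-side code is identical regardless of which ASIC sits underneath.
    driver.create_vlan(100)
    driver.add_route("10.0.0.0/24", "192.0.2.1")

provision(VendorXDriver())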

Lumen launches 400G IP transit ports across its Tier 1 Internet backbone

Lumen launched 400G IP transit ports across its Tier 1 Internet backbone network in the U.S. and EMEA. The service is targeted at businesses, hyperscalers and content providers that need ultra-high-bandwidth connections to efficiently support massive IP transit traffic. It is now available in eight markets and will be expanded across the footprint throughout the year.

Lumen's AS3356 network is the number one peered network in the world based on data from CAIDA.org. Traffic on this network grew 38% year-over-year (YoY) in 2021 and 16% YoY in 2022.
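
Compounding the two reported growth rates gives a rough sense of the two-year trend on AS3356:

# Compounded AS3356 traffic growth over 2021-2022, using the figures above.
growth = 1.38 * 1.16 - 1
print(f"{growth:.0%}")   # roughly 60% total traffic growth across the two years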

"We continue to capitalize on our network strength and deliver the services that are so important to our customers," said Andrew Dugan, Lumen Chief Technology Officer. "Businesses that need IP transit are looking for efficient global internet routes connecting where data is and where it needs to go. Lumen's highly peered AS3356 network can connect internet traffic sources and destinations with minimal network hops. Combining Lumen's 400G transit ports with our 400G wavelengths well positions us in the IP transit market for delivering ultra-high bandwidth connections."

https://www.lumen.com/en-us/home.html

F5 quarterly revenue up 11% yoy; cites macroeconomic uncertainty for 9% job cuts

F5 reported revenue of $703 million for the second quarter of fiscal year 2023, up 11% from $634 million in the second quarter of fiscal year 2022. Global services revenue grew 8% from the year-ago period, while product revenue grew 14%, reflecting 43% growth in systems revenue and a 13% decline in software revenue from the year-ago period.
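
A quick check of the headline growth rate against the reported quarterly revenue figures:

# Year-over-year revenue growth implied by the reported figures ($ millions).
q2_fy23, q2_fy22 = 703, 634
print(f"{q2_fy23 / q2_fy22 - 1:.1%}")   # ~10.9%, rounding to the stated 11%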

GAAP net income for the second quarter of fiscal year 2023 was $81 million, or $1.34 per diluted share compared to $56 million, or $0.92 per diluted share, in the second quarter of fiscal year 2022.

“We delivered 11% revenue growth in our second quarter as a result of stronger than expected systems shipments and strong global services performance,” said François Locoh-Donou, F5’s President and CEO. “While customer spending remains pressured by macro-economic uncertainty near term, we are differentiated in our ability to help customers tackle the significant challenges ahead, including simplifying their hybrid and multi cloud application environments.”

“Given the persistent macro uncertainty and its impact on customer spending, we now expect low-to-mid single-digit revenue growth in fiscal year 2023 with non-GAAP operating margins of approximately 30% and non-GAAP earnings growth of 7% to 11%,” continued Locoh-Donou.

“Our portfolio and roadmap are squarely aligned with our customers’ hybrid and multi-cloud realities and their desire to simplify operations and lower total cost of ownership,” said Locoh-Donou. “Given the current demand environment however, we are taking action to reduce our operating costs while prioritizing initiatives and innovations that will deliver the most benefit to our customers.”

F5 announced today that it is reducing its global headcount by approximately 620 employees, or approximately 9% of its total workforce. These workforce-related actions are expected to be completed by April 21, 2023, with the exception of the Company's EMEA region and parts of its APAC region, where employees will continue the consultation process over the coming weeks, as required by local laws.

https://investors.f5.com/

Seagate agrees to $300 million penalty for selling hard disks to Huawei

Seagate Technology Holdings announced a settlement agreement with the U.S. Department of Commerce’s Bureau of Industry and Security that resolves allegations that Seagate’s sales of hard disk drives to Huawei between August 17, 2020 and September 29, 2021 did not comply with the U.S. Export Administration Regulations.

Under the terms of the settlement agreement, Seagate has agreed to pay $300 million to the U.S. Department of Commerce, to be paid in installments of $15 million per quarter over the course of five years, with the first installment due in October 2023. Additional information regarding the terms of the agreement is included in the Form 8-K that will be filed today with the Securities and Exchange Commission.
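
The installment schedule works out exactly to the stated penalty:

# $15M per quarter, four quarters per year, for five years.
print(15 * 4 * 5)   # 300 ($ millions), matching the $300 million settlement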

“We believe entering this agreement with BIS and resolving this matter is in the best interest of Seagate, our customers and our shareholders,” said Dave Mosley, the Company’s chief executive officer. “Integrity is one of our core values, and we have a strong commitment to compliance as evidenced by our global team of international trade compliance and legal professionals – complemented by external experts and outside counsel. While we believed we complied with all relevant export control laws at the time we made the hard disk drive sales at issue, we determined that engaging with BIS and settling this matter was the best course of action. We are now moving forward fully focused on executing our strong technology roadmap to support the growing demand for mass data storage solutions.”

https://www.seagate.com/news/news-archive/seagate-reaches-resolution-with-the-u-s-department-of-commerces-bureau-of-industry-and-security-pr/

Rambus GDDR6 PHY hits 24 Gbps performance benchmark

The Rambus GDDR6 PHY delivers a market-leading data rate of up to 24 Gigabits per second (Gbps), providing 96 Gigabytes per second (GB/s) of bandwidth per GDDR6 memory device. 
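
The per-device bandwidth follows from the per-pin data rate and the 32-bit data interface of a standard GDDR6 device (the 32-bit width comes from the GDDR6 standard rather than the release itself):

# Per-device bandwidth from the per-pin data rate (32-bit GDDR6 interface assumed).
data_rate_gbps = 24            # Gb/s per data pin
device_width_bits = 32         # data pins per GDDR6 device (two 16-bit channels)
print(data_rate_gbps * device_width_bits / 8)   # 96.0 GB/s, matching the figure above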

The GDDR6 offering enables cost-efficient, high-bandwidth memory performance for AI/ML, graphics and networking applications.

“With the new level of performance achieved by our GDDR6 PHY, designers can deliver the bandwidth needed by the most demanding workloads,” said Sean Fan, chief operating officer at Rambus. “As with our industry-leading HBM3 memory interface, this latest achievement demonstrates our continued commitment to advancing state-of-the-art memory performance to meet the needs of advanced computing applications such as generative AI.”

The Rambus GDDR6 PHY can be combined with Rambus GDDR6 digital controller IP to provide a complete GDDR6 memory interface subsystem solution.

https://www.rambus.com/rambus-accelerates-ai-performance-with-industry-leading-24-gb-s-gddr6-phy/