Showing posts with label OCP. Show all posts

Monday, October 24, 2022

OCP: Introducing CXL Memory Pooling


https://youtu.be/3TsYgWaKGuU

Astera Labs has demonstrated the industry's first CXL memory pooling solution to reduce memory stranding, optimize memory utilization and reduce TCO for cloud servers. 

The demo shows how memory pooling can be deployed today with Leo and CXL 1.1-capable 4th Gen Intel Xeon processors.

Presented by Ahmad Danesh, Sr. Director,  Product Management, Astera Labs.

New CXL 3.0 spec doubles the data rate to 64 GT/s

The CXL Consortium announced the release of the CXL 3.0 specification, doubling the data rate to 64 GT/s compared to the 2.0 generation. The idea behind CXL is to maintain memory coherency between the CPU memory space and memory on attached devices, allowing resource sharing. “Modern datacenters require heterogenous and composable architectures to support compute intensive workloads for applications such as Artificial Intelligence and Machine Learning...
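The practical effect of doubling the data rate can be sketched with simple arithmetic. This is a rough model that ignores encoding and protocol overhead, and the helper name is ours:

```python
# Back-of-the-envelope link bandwidth: transfer rate (GT/s) x lane count,
# divided by 8 bits per byte. Encoding/FLIT overhead is ignored here.

def raw_bandwidth_gb_per_s(gt_per_s: float, lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a CXL/PCIe-style link."""
    return gt_per_s * lanes / 8

# CXL 2.0 runs at 32 GT/s; CXL 3.0 doubles that to 64 GT/s.
for gen, rate in [("CXL 2.0", 32), ("CXL 3.0", 64)]:
    print(f"{gen}: x16 raw bandwidth ~ {raw_bandwidth_gb_per_s(rate, 16):.0f} GB/s")
```

On a x16 link this works out to roughly 64 GB/s for CXL 2.0 versus 128 GB/s for CXL 3.0 in each direction, before overhead.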

Marvell to acquire Tanzanite for Compute Express Link (CXL)

Marvell agreed to acquire privately-held Tanzanite Silicon Solutions, a start-up based in Milpitas, California that is developing advanced Compute Express Link (CXL) technologies. Terms of the all-cash transaction were not disclosed. Marvell said the future cloud data center will be built on fully disaggregated architecture utilizing CXL technology, requiring greater high-speed interconnectivity than ever combined with optimized compute, networking,...

OCP: Smarter chassis designs for sustainable data centers

Inspur showcased four Open Compute Project (OCP) certified systems at #OCPSummit2022 with a focus on data center sustainability, such as utilizing renewable energy, recycling, thermal reuse, and the use of liquid-cooling technologies to reduce water consumption.

Here are highlights from Alan Chang, VP, Technical Operations, Inspur.

https://youtu.be/uYvtbDN440w

OCP: Massive power efficiency gains for tape vs HDDs

https://youtu.be/5NAr3YPilM8

IBM unveiled its Diamondback Tape Library, a high-density archival storage solution that is physically air-gapped to help protect against ransomware and other cyber threats in hybrid cloud environments.

IBM Diamondback, which can store hundreds of petabytes of data, is designed for both traditional and "new wave" hyperscalers – global enterprises aggregating massive customer data sets. 

In this video, Shawn Brume highlights the platform, including its power efficiency advantage over HDDs in OCP data centers.


OCP: Enfabrica on Rethinking Data Center Fabrics

Openness, efficiency, scalability, and sustainability are key themes at this year's #OCPSummit2022. One of the big challenges ahead is I/O scaling, which may call for a rethinking of the fabric architecture.

Enfabrica, a Silicon Valley start-up still in stealth-mode, is looking to contribute to the OCP community.

Company founders Roshan Sankar and Shrijeet Mukherjee discuss.

https://youtu.be/--jLr2TM3IU

Thursday, October 20, 2022

Video: OCP adds Sustainability to its Charter

"You can feel the energy". 

Rebecca Weekly, Chair of the Open Compute Project, provides a wrap-up of this year's #OCPSummit2022 in San Jose, California.

Sustainability has become a key tenet for everything at OCP. A new working group was established. Future technologies, including optical interconnects, are being explored. Innovation in data center infrastructure remains the focus.

https://youtu.be/hmfaci8MUBQ

Wednesday, October 19, 2022

Video: Advancing Composable Memory Systems


Software-defined memory systems have tremendous potential to accelerate innovation in large data centers.  Manoj Wadekar, Hardware Systems Technologist, Meta, shares a perspective on discussions underway at OCP Summit 2022 in San Jose, California.

Tuesday, October 18, 2022

Meta unveils Grand Teton, its next-gen AI system

At the Open Compute Project Summit in San Jose, Meta unveiled Grand Teton, its next-generation, GPU-based hardware platform. Compared to Zion, its predecessor, Grand Teton boasts 4x the host-to-GPU bandwidth, 2x the compute and data network bandwidth, and 2x the power envelope. Grand Teton also has an integrated chassis, in contrast to Zion-EX, which comprises multiple independent subsystems.

Grand Teton has been designed with greater compute capacity to better support memory-bandwidth-bound workloads at Meta, such as its open source DLRMs. Grand Teton’s expanded operational compute power envelope also optimizes it for compute-bound workloads, such as content understanding. 

The previous-generation Zion platform consists of three boxes: a CPU head node, a switch sync system, and a GPU system, and requires external cabling to connect everything. Grand Teton integrates this into a single chassis with fully integrated power, control, compute, and fabric interfaces for better overall performance, signal integrity, and thermal performance. 

This high level of integration dramatically simplifies the deployment of Grand Teton, allowing it to be introduced into data center fleets faster and with fewer potential points of failure, while providing rapid scale with increased reliability.

Meta also introduced Open Rack v3 (ORV3), a data center rack with a frame and power infrastructure capable of supporting a wide range of use cases — including support for Grand Teton.

ORV3’s power shelf isn’t bolted to the busbar. Instead, the power shelf installs anywhere in the rack, which enables flexible rack configurations. Multiple shelves can be installed on a single busbar to support 30kW racks, while 48VDC output will support the higher power transmission needs of future AI accelerators. It also features an improved battery backup unit, upping the capacity to four minutes, compared with the previous model’s 90 seconds, and with a power capacity of 15kW per shelf. Like the power shelf, this backup unit installs anywhere in the rack for customization and provides 30kW when installed as a pair.
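The rack-level numbers above are consistent with simple arithmetic. A sketch, using the figures from the article; the constants and helper names are ours:

```python
# Illustrative ORV3 rack power math: 15 kW per power shelf, 48 VDC busbar
# (figures from the article; the structure of this sketch is an assumption).

SHELF_KW = 15.0
BUSBAR_VOLTS = 48.0

def rack_power_kw(num_shelves: int) -> float:
    """Total rack power with num_shelves power shelves on one busbar."""
    return num_shelves * SHELF_KW

def busbar_current_amps(power_kw: float, volts: float = BUSBAR_VOLTS) -> float:
    """Current the busbar must carry at a given load (I = P / V)."""
    return power_kw * 1000 / volts

print(rack_power_kw(2))            # two shelves on one busbar -> 30.0 kW
print(busbar_current_amps(30.0))   # -> 625.0 A at 48 VDC
```

The current calculation also hints at why 48VDC distribution matters: at a lower voltage, the same 30kW load would demand proportionally more current from the busbar.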

https://engineering.fb.com/2022/10/18/open-source/ocp-summit-2022-grand-teton/

OCP adopts Data Center Sustainability as top-level project

The OCP Foundation has adopted Sustainability as a new top-level project and as the 5th Tenet to ensure that all work efforts across all OCP Projects have a focus on sustainability.

OCP’s Sustainability top-level project will set reporting targets, monitor and plan compliance with external regulations and best practices, develop high level sustainability KPIs that go beyond traditional data center measures such as power usage effectiveness (PUE), and assemble sustainability technology and process roadmaps for the larger OCP Community to follow under the OCP’s new Sustainability Tenet.

Sustainability efforts carried out across the OCP will focus on technology specific implementations of the directions set by its new OCP Sustainability Project. 

Current OCP sustainability efforts in cooling focus on increasing thermal management efficiency with liquid-based cooling technologies, such as immersion and cold-plate cooling, and on heat re-use. Beyond thermal optimizations, data center facilities efforts are working to reduce the carbon associated with operations, data center facility construction, and the manufacturing of IT equipment. 

Designing for circularity is also an important focus to positively control the lifecycle of IT physical infrastructure. For example, firmware needs to be open to promote reuse and improve long term sustainability. OCP hardware specifications will continue to evolve to enable products to remain in use for as long as possible, and design for circularity to enable infrastructure within the data center to be repurposed, and ultimately enable component and material recovery when decommissioned.

"The OCP community has a responsibility to contribute towards reducing the environmental impact of the industry, and drive conversations within their influence to impact technologies deployed in the data centers. The tenets of the OCP foster openness that enables mainstream delivery of the most efficient designs for scalable computing and uniquely positions OCP to be an effective agent for climate action. Adding a mandate for sustainability looking at transparency, circularity, and embodied carbon in IT equipment, silicon and data center facilities will add significant weight to our ability to help the industry minimize its impact on the environment," said George Tchaparian, CEO Open Compute Project Foundation.

https://www.opencompute.org/blog/open-compute-project-foundation-announces-sustainability-as-a-5th-tenet-and-a-top-level-project



Alexander Rakov, Sustainability Leader - C&SP, Schneider Electric, talks about how the OCP community is driving innovation that could achieve huge gains in sustainable performance. #OCPSUMMIT2022

Tuesday, July 19, 2022

OCP releases Bunch of Wires (BoW) spec for chiplet interconnect

The OCP Foundation released its Bunch of Wires (BoW) specification for Chiplet interconnect, representing a next step in the OCP Open Domain Specific Architecture (ODSA) Project's march towards establishing an open Chiplet ecosystem as a catalyst for a new silicon market place and integrated circuit supply chain model. 

BoW specifies a physical layer (PHY) optimized for System on a Chip (SoC) disaggregation, and complements OCP ODSA Open High Bandwidth Interconnect (OpenHBI) PHY specification targeting High Bandwidth Memory and other parallel bandwidth intensive use cases.

The ODSA BoW PHY specification is optimized for both commodity (organic laminate) and advanced packaging technologies, enabling cost and energy efficient, as well as high-performance designs across a wide range of process nodes. The specification was authored to allow many use cases driving significant economies of scale. Care was taken to impose as few constraints as possible and to avoid including required features in the specification that could increase design complexity when disaggregating an existing SoC.

The OCP Foundation notes that its BoW specification follows an open license model. It is already in use at at least 10 companies, including Samsung and NXP, across over a dozen different use cases spanning 5, 6, 12, 16, 22 and 65nm process nodes, and covering Chiplet-based products for networking, specialized AI silicon, FPGAs, and processors. 

"The demand for specialized silicon has been increasing steadily due to workload diversity, such as with the adoption of AI and ML, and we expect this trend to continue for several years. In response to this demand the OCP recognizes that it must be a catalyst to establish open and standardized Chiplet ecosystems and new markets by investing in Chiplet interconnect technology that will enable composable silicon. The release of the BoW specification is an important step in this direction. We expect to increase our efforts on developing supply chain models for composable silicon," said Bill Carter, CTO, OCP Foundation.

"The semiconductor industry continues to innovate in new and exciting directions with multicore application specific SoCs, custom core architectures, deep learning, optical communications, analog processing techniques, RF interfaces, memory architectures and more. The new challenge is how to integrate all of these disparate innovations, several of which are not practical to produce at cutting-edge process nodes. Today's announcement from the OCP ODSA, releasing the Bunch of Wires open-source specification for Chiplet interconnect, supplies a new tool toward expanding innovation in the market. This opens the door to a more competitive landscape and diversity in innovation at varying cadences and is fuel for a healthy industry," said Tom Hackenberg, Principal Analyst, Computing & Software Semiconductor, Memory and Computing Division, Yole Intelligence.

http://www.opencompute.org

OCP 2019: New Open Domain-Specific Architecture sub-project 

The Open Compute Project is launching an Open Domain-Specific Architecture (ODSA) sub-project to define an open interface and architecture that enables the mixing and matching of available silicon die from different suppliers onto a single SoC for data center applications. The goal is to define a process to integrate best-of-breed chiplets onto a SoC. Netronome played a lead role initiating the new project. “The open architecture for domain-specific...


Thursday, May 26, 2022

HiWire Consortium to be absorbed into Open Compute Project

The HiWire Consortium standardization efforts will be absorbed into a new Interconnects Sub-Project inside the Open Compute Project Foundation (OCP). All the Active Electrical Cable Specs were contributed to OCP and can be found on the OCP Contribution Database and the associated products are on display on the OCP Marketplace.

The new Interconnects Sub-Project is led by Don Barnetson, Vice President for AEC Products at Credo. 

As a result of this announcement, OCP will have oversight of the ratified HiWire Specification v1.0, which was originally announced in 2020. The new AEC Project inside OCP will build on this specification by adding additional speeds, capabilities and test specifications as driven by OCP membership.

“The HiWire Consortium leads the inaugural effort to bring standardization to a new category of connectivity products – Active Electrical Cables (AECs),” said Sheng Huang, President of the HiWire Consortium. “As AECs move into the mainstream, we feel the broader umbrella of OCP will continue to accelerate standardization efforts and increase industry adoption.”

“Within the next five years, the 650 Group predicts AECs will take over 75% of server connections and displace chassis in hyperscaler data centers,” said Don Barnetson, Vice President for AEC Products at Credo and the OCP AEC project lead. “Standardization within OCP will further accelerate adoption among the hyperscalers and telecom service provider space.”

"OCP’s members see AECs as a critical technology for their customers, which include the world’s largest hyperscalers,” said Steve Helvie, Vice President of Channel Development for OCP. “The Community is excited to continue this standardization work and thanks the HiWire Consortium for its pioneering effort to bring this revolutionary product to market with such broad industry acceptance. We're also very fortunate to have Credo as a certified OCP Solution Provider showcasing AECs on our OCP Marketplace, providing the Community a more comprehensive set of OCP solutions.”

“Active Electrical Cables are key to enabling cloud infrastructure interconnect at the density, scale and ultra-low power required to meet our demands for the coming years,” said Gerald Degrace, Senior Director of Next Gen Technology, Microsoft. “As an active member of both the HiWire Consortium and OCP we’re excited by the synergies of standardizing AECs inside OCP’s broad industry scope.”

https://hiwire.org



Credo ships HiWire Active Electrical Cables

Credo announced the production availability of HiWire Active Electrical Cables (AEC). The company says its AECs provide plug-and-play, deterministic, persistent in-rack and inter-rack connections at lower cost and power than alternative optical approaches. Additionally, the AEC family provides system-level, in-cable speedshifting, enabling seamless connectivity of 50G PAM4-enabled switch ports to widely available 25G NRZ-based servers. Credo...

HiWire aims for standard Active Electrical Cables at 400G and up

A new HiWire Consortium has been established to pursue the standardization and certification of a new category of Active Electrical Cables (AEC). The group is dedicated to the establishment and ongoing development of an AEC standard that defines a specific implementation of the many industry MSAs and a formal certification process. This will enable an ecosystem of trusted Plug and Play AECs, available from multiple sources, for the hyperscale data...

Wednesday, November 10, 2021

2021 OCP Global Summit - the next 10 years

This year is the tenth anniversary of the Open Compute Project, an organization that has advanced the cause of "vanity free" infrastructure for cloud providers. The mission is broadening and the next 10 years will see advances in additional domains. Here is a 2-minute perspective from Rebecca Weekly, Board Chair, Open Compute Project.

https://youtu.be/xeU8WsVscAs

For more video interviews with industry experts, visit: https://nextgeninfra.io/

Inspur and Samsung build open storage solution for OCP

Inspur Information, which ranks among the world’s top 3 server manufacturers, and Samsung announced a Poseidon V2 E3.x reference system for the Open Compute Project community.

The product adopts a composable architecture to maximize the benefits of the EDSFF E3.x form factor. The Poseidon V2 system can accommodate not only PCIe Gen5 SSDs but also various devices such as AI/ML accelerators and CXL memory expanders. Data center users can configure the system according to application needs.



“Following the development of the E1.S reference system, we expect that this type of storage solution will become one of the most sought-after and cost-effective storage solutions on the market for leading cloud data center servers and hyperscale companies that operate large data centers,” according to Jongyoul Lee, Executive Vice President of Samsung’s Memory Software Development Team. “We are eager to continue our collaborative work on the E3.S reference system with Inspur to drive further advancements in future server and storage systems.”

“Through our combined vision with Inspur's general purpose server design and Samsung's Poseidon, we believe E1.S and E3.x will bring a revolutionary use case that fulfills the need for an efficient high-performance and high-density storage system,” stated Alan Chang, VP of Technical Operation at Inspur Information. “Customers who use general purpose servers as their compute can smoothly transition to Poseidon, whose modularized design will reduce redundant engineering and validation across the board. We anticipate even broader usage models and applications with the new Poseidon v2 specification.”

https://www.inspursystems.com

Tuesday, November 9, 2021

Arista intros 400G Switches at OCP Summit

Arista Networks introduced its next generation of 7050X and 7060X Series switches optimized for 400G networks.

Introduced at OCP 2021, the all-new Arista 7388X5 continues the modular system innovation of the 7368X4 and is compliant with the OCP Minipack2 specifications, doubling performance with 30% improved power efficiency. Arista's 7388X5 shares Meta’s Minipack2 goal of a choice of form factor supporting high-density 200G and 400G links. Arista offers a choice of operating systems with enhancements and supports additional use cases with operational efficiency benefits that simplify cloud network designs.

The Arista 7060X5 systems are the highest-density 400G options for leaf-spine architectures, offering next-generation performance at the lowest power consumption and leveraging the latest Broadcom 25.6 Tbps silicon. With up to 64 ports of 400G in 2U, the 7060DX5-64S delivers 10.6 Bpps and can be flexibly used in 100G, 200G and 400G environments.
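The headline port counts line up with the silicon capacity. A quick sanity check; the helper names are ours:

```python
# Sanity check: port count x port speed vs. switch silicon capacity
# (25.6 Tbps Broadcom ASIC cited in the article).

ASIC_TBPS = 25.6

def config_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate front-panel bandwidth in Tbps for a port configuration."""
    return ports * gbps_per_port / 1000

for ports, speed in [(64, 400), (128, 200), (256, 100)]:
    tbps = config_tbps(ports, speed)
    print(f"{ports} x {speed}G = {tbps} Tbps",
          "fits" if tbps <= ASIC_TBPS else "oversubscribed")
```

All three breakouts sum to the same 25.6 Tbps, which is why a single ASIC generation can serve 100G, 200G and 400G environments.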

“We are seeing customers of all sizes show interest in the next generation of 400G systems that provide incremental improvements without sacrificing backward compatibility. The Arista 7050X4 and 7358X4 are the latest in the long line of systems built around the Broadcom Trident chipsets that have delivered 20 times the performance increases over the last 10 years,” said Anshul Sadana, Chief Operating Officer at Arista Networks.

The 7050X4 Series 32 x 400G and 7358X4 Series 128 x 100G / 32 x 400G systems enable large enterprise and service provider customers to unlock the potential of 400G. 

Arista says the new systems provide a smooth evolution to higher performance without disruption to existing architectures, increasing network capacity by 4 times over the previous generation. All parts of the 7358X4 are field replaceable, simplifying deployment, and the system leverages the same common equipment and modules as the Arista 7368X4 Series, accelerating the migration to 400G data center networks with up to 32 ports of 400G in 4RU and pay-as-you-grow flexibility.

The 7050X4 and 7358X4 feature:

  • Enhanced network telemetry to detect and address congestion hotspots in real time
  • Traffic management enhancements tuned for RoCE and NVMEoF workloads
  • Support for 10G to 400G to ease the transition to higher performance compute

The new 7050X4 and 7358X4 are available in Q1 2022. The 7050X4 is available in a choice of two port configurations supporting 32 ports of 400G OSFP or QSFP-DD in 1RU. The 7358X4 modular system provides a choice of 25G, 100G and 400G ports.

The Arista 7060X5 and 7388X5 double performance compared with the 7060X4 and 7368X4 in a choice of form factors:

  • 64 x 400G in 2U fixed or 128 x 200G in 4U modular systems
  • Increase network radix by a factor of 2 or allow the migration to 200G

The 7388X5 and 7060X5 will be available in 1H 2022, with customer testing currently in progress. Pricing starts at $1800 / 400G.

https://www.arista.com/en/company/news/press-release/13400-pr-20211109


Meta deploys Cisco for Wedge400C Top of Rack (TOR) switch

At the tenth annual OCP Summit in San Jose, Cisco confirmed that Meta is deploying its Cisco Silicon One Q200L device along with the Wedge400C Top of Rack (TOR) switch. Cisco Q200L uses 7nm technology to provide a 12.8 Tbps solution for web scale switching and routing. The 12.8 Tbps Wedge400C supports up to 16 ports of 400G and 32 ports of 200G.

Meta worked with Cisco to develop and deploy two new next-generation TOR switches. The latest versions of Meta’s Wedge TOR, the Wedge 400 and 400C, offer higher front panel port density, and greater performance for AI and machine learning applications, while also enabling future expansions. The Wedge 400 and 400C have several improvements over the Wedge 100S, including 4x the switching capacity (upgraded from 3.2 Tbps to 12.8 Tbps), 8x the burst absorption performance, and a field-replaceable CPU subsystem.

“Cisco Silicon One is uniquely positioned in the industry to provide a common architecture across the entire network, enabling massive operational efficiencies for our customers,” said Rakesh Chopra, Cisco Fellow, Common Hardware Group Architecture and Platforming, Cisco. “The Q200L is an important part of Cisco’s expanding Silicon One product family and as part of our overall disagg component model, it provides Meta a building block to innovate on top of, at hyperscale efficiency and scale.”

https://newsroom.cisco.com/press-release-content?type=webcontent&articleId=2207937

ADVA adapts its Oscilloquartz timing for Open Compute Time Appliance Project

ADVA launched its OSA 5400 TimeCard, enabling operators of data center network infrastructure and 5G open RAN architectures to achieve highly accurate and reliable distribution and synchronization of time. 

The new PCIe card, which is built on the OSA 5400 SyncModule, transforms an open compute server into a precise and stable PTP grandmaster, boundary clock, slave clock or NTP server. The OSA 5400 TimeCard is the market’s first solution developed within the framework of the Open Compute Project’s (OCP) Time Appliance Project (TAP) and enhanced with both PTP and NTP functions. With advanced synchronization capabilities, it solves a key challenge for network operators as they virtualize their infrastructure and replace purpose-built hardware with standard servers.

“Timing in data centers and other open compute cases like 5G open RAN is becoming increasingly crucial as a growing number of applications require new levels of performance. But the sub-microsecond synchronization needed for efficient resource sharing is something that open compute servers often cannot provide. Open compute customers need a way to inject the most advanced synchronization capabilities into their white box hardware. That’s why we’ve engineered our OSA 5400 TimeCard™. It brings our experience in network and application synchronization to open compute servers as well as open RAN equipment, enabling a whole new group of customers to benefit from our unique expertise,” said Gil Biran, general manager, Oscilloquartz, ADVA. 
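For context, a PTP slave clock (one of the roles the TimeCard can play) derives its offset from the grandmaster using four timestamps. A minimal sketch of the standard IEEE 1588 calculation, with made-up example timestamps:

```python
# How a PTP slave computes its offset from a grandmaster. Standard
# IEEE 1588 math; the timestamps below are invented nanosecond values.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric network path."""
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# A slave running 500 ns ahead of the master over a 100 ns path:
offset, delay = ptp_offset_and_delay(t1=0, t2=600, t3=1000, t4=600)
print(offset, delay)  # -> 500.0 100.0
```

The math assumes a symmetric path; real deployments rely on hardware timestamping (as on the TimeCard) to push the residual error down to the sub-microsecond levels the article mentions.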

https://www.adva.com/en/newsroom/press-releases/20211109-adva-timing-card-brings-precise-synchronization-to-open-compute-servers

Sunday, July 11, 2021

Open Compute Project names Rebecca Weekly of Intel as new Chair

 The Open Compute Project Foundation (OCP) announced that Rebecca Weekly was elected as chairperson of the Open Compute Project.

Weekly is Vice President, General Manager, and Senior Principal Engineer of Hyperscale Strategy and Execution at Intel Corporation. She replaces Mark Roenigk, Head of Hardware Engineering at Facebook, who served as chair for the past two years. 

Mark Roenigk is a founding board member of OCP and has served on the board since its inception in 2011 and will continue to serve on the board representing Facebook. 

An additional change announced today is the retirement of Rocky Bullock as CEO of the OCP Foundation. Rocky has faithfully served on the Board since its inception, served the OCP Foundation as Chief Financial Officer, and was then named Chief Executive Officer of OCP in 2015. 


Bill Carter, currently serving as OCP’s Chief Technology Officer, will assume the role of Executive Director for the Foundation on an interim basis, while the Board of Directors conducts a search for a permanent replacement for Rocky’s position.

"We’re thankful for the 10 years of service and passion from Rocky and Mark that have shaped the pace of innovation and community that OCP has built and will continue to nurture for many years to come,” said Rebecca Weekly.

The 2021 OCP Global Summit is scheduled for the week of November 8–10 at the San Jose Convention Center in San Jose, CA. The annual event consists of two days of live sessions, sponsored exhibits, interactive experience centers and live virtual attendance components for those unable to travel. 

https://www.opencompute.org/summit/global-summit 


Tuesday, May 12, 2020

Samsung unveils "ruler form factor" NVMe SSDs for OCP

Samsung announced a solid state drive with an E1.S form factor and full PCIe Gen 4 support.

The new drive, which leverages the production efficiencies of the company’s sixth-generation (1xx-layer), three-bit V-NAND, uses the new form factor to maximize the number of drives possible in a 1-RU chassis.

“Offering the most 1U server-optimized form-factor, the PM9A3 will improve space utilization, add PCIe Gen4 speeds, enable increased capacity and more,” said Mr. Jongyoul Lee, senior vice president of Samsung’s Memory Software Development Team at the Open Compute Project Virtual Global Summit. “We see it eventually becoming the most sought-after storage solution on the market for tier one and tier two cloud datacenter servers, and one of the more cost-effective,” he added.

The newly announced PM9A3 drive, to be available in three versions, is expected to feature a PCIe Gen 4 (x4) interface for more than twice the sequential read performance of PCIe Gen 3 (3200MB/s), and include dedicated hardware accelerators for nearly twice the random write performance (180,000 IOPS) of the previous generation. Capacities will range from 960 GB to 7.68 TB.


Wiwynn unveils liquid-cooled Open Rack for OCP

Taiwan-based Wiwynn unveiled a standalone rack-level, liquid cooling solution for next-generation Open Compute Project (OCP) servers.

Wiwynn’s advanced liquid cooling solution supports up to 36kW per rack and enables high-power component usage at the L10 level. The system utilizes a rear door heat exchanger (RDHx) to cool the liquid that carries heat away from high-power components (CPU, GPU or ASIC) via cold plates.

Wiwynn says its design enables a standalone system that requires no extra facility coolant and no infrastructure changes.

The rack-level cooling solution design will support the Open Compute Project (OCP) Open Rack Standard V3 (ORV3) spec and is backward compatible with OCP ORV2. Both existing and future OCP systems can benefit from this high efficiency cooling system. The blind mate quick disconnect (QD), easy assembling cold plate designs plus the independent rack level cooling control system enhance serviceability and management.

“We have witnessed the power consumption of data center IT systems surging year over year with the flourishing of cloud and AI applications,” said Dr. Sunlai Chang, Senior Vice President and CTO of Wiwynn. “We are proud to introduce our innovative standalone rack-level liquid cooling solution. It helps data centers face the challenge of increasing power density while requiring no infrastructure changes and providing enhanced serviceability and management.”

“It’s great to work with our partner, Wiwynn, to bring this novel liquid cooling solution into data centers with high cooling efficiency," said Steve Mills, Mechanical Engineer at Facebook. “By leveraging Wiwynn’s development experience in OCP, the design will help expand ORV3 to higher power density applications and accelerate the adoption of liquid cooling in the open community.”

Sunday, March 1, 2020

OCP Summit in San Jose is cancelled

The Open Compute Project Foundation (OCP) has decided to cancel the OCP Global Summit due to the COVID-19 situation. The event was scheduled to take place March 3-5 at the San Jose Convention Center in California. Also canceled were associated events including the Future Technology Symposium, the OCP SONiC/SAI Pre-Summit Workshop, and the Open System Firmware Hack event.

The OCP Summit is an annual event with an active and broad community following.

https://www.opencompute.org/summit/global-summit

Sunday, September 29, 2019

AT&T contributes Distributed Disaggregated Chassis white box to OCP

AT&T has contributed its specifications for a Distributed Disaggregated Chassis (DDC) white box architecture to the Open Compute Project (OCP). The contributed design aims to define a standard set of configurable building blocks to construct service provider-class routers, ranging from single line card systems, a.k.a. “pizza boxes,” to large, disaggregated chassis clusters.  AT&T said it plans to apply the design to the provider edge (PE) and core routers that comprise its global IP Common Backbone (CBB).

“The release of our DDC specifications to the OCP takes our white box strategy to the next level,” said Chris Rice, SVP of Network Infrastructure and Cloud at AT&T. “We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace.”

AT&T’s DDC white box design, which is based on Broadcom’s Jericho2 chipset, calls for three key building blocks:

  • A line card system that supports 40 x 100G client ports, plus 13 400G fabric-facing ports.
  • A line card system that supports 10 x 400G client ports, plus 13 400G fabric-facing ports.
  • A fabric system that supports 48 x 400G ports. A smaller, 24 x 400G fabric system is also included.

AT&T points out that the line cards and fabric cards are implemented as stand-alone white boxes, each with their own power supplies, fans and controllers, and the backplane connectivity is replaced with external cabling. This approach enables massive horizontal scale-out as the system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane. Cooling is significantly simplified as the components can be physically distributed if required. The strict manufacturing tolerances needed to build the modular chassis and the possibility of bent pins on the backplane are completely avoided.

Four typical DDC configurations include:

  • A single line card system that supports 4 terabits per second (Tbps) of capacity.
  • A small cluster that consists of 1+1 fabric systems (the second for added reliability) and up to 4 line card systems. This configuration supports 16 Tbps of capacity.
  • A medium cluster that consists of 7 fabric systems and up to 24 line card systems. This configuration supports 96 Tbps of capacity.
  • A large cluster that consists of 13 fabric systems and up to 48 line card systems. This configuration supports 192 Tbps of capacity.

The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links. The design inherently supports redundancy in the event fabric links fail.
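As a back-of-the-envelope cross-check (ours, not part of the AT&T contribution), the quoted cluster capacities follow directly from the building blocks: each line card system terminates 4 Tbps of client capacity and 13 x 400G fabric links. A minimal sketch, with illustrative helper names:

```python
# Back-of-the-envelope check of the DDC configurations listed above.
# Figures come from the building-block list; the helper names are
# illustrative, not from the AT&T specification.

LINE_CARD_CLIENT_GBPS = 40 * 100     # 40 x 100G (or 10 x 400G) = 4 Tbps
FABRIC_LINKS_PER_LINE_CARD = 13      # 13 x 400G fabric-facing ports
FABRIC_PORTS_PER_SYSTEM = 48         # large fabric system: 48 x 400G


def cluster_capacity_tbps(line_cards: int) -> float:
    """Total client capacity of a DDC cluster, in Tbps."""
    return line_cards * LINE_CARD_CLIENT_GBPS / 1000


def fabric_ports_needed(line_cards: int) -> int:
    """400G fabric ports the line card systems must cable into."""
    return line_cards * FABRIC_LINKS_PER_LINE_CARD


for name, cards in [("single", 1), ("small", 4), ("medium", 24), ("large", 48)]:
    print(f"{name}: {cluster_capacity_tbps(cards):g} Tbps")
# → single: 4, small: 16, medium: 96, large: 192

# A large cluster's 48 line cards need 48 * 13 = 624 fabric ports,
# exactly the 13 fabric systems * 48 ports called out above.
assert fabric_ports_needed(48) == 13 * FABRIC_PORTS_PER_SYSTEM
```

The same arithmetic illustrates why external cabling replaces the backplane: capacity grows linearly with the number of stand-alone boxes rather than being capped by the dimensions of a single chassis.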


“We are excited to see AT&T's white box vision and leadership resulting in growing merchant silicon use across their next generation network, while influencing the entire industry,” said Ram Velaga, SVP and GM of Switch Products at Broadcom. “AT&T's work toward the standardization of the Jericho2 based DDC is an important step in the creation of a thriving eco-system for cost effective and highly scalable routers.”   

“Our early lab testing of Jericho2 DDC white boxes has been extremely encouraging,” said Michael Satterlee, vice president of Network Infrastructure and Services at AT&T. “We chose the Broadcom Jericho2 chip because it has the deep buffers, route scale, and port density service providers require. The Ramon fabric chip enables the flexible horizontal scale-out of the DDC design. We anticipate extensive applications in our network for this very modular hardware design.”

https://about.att.com/story/2019/open_compute_project.html

Broadcom's Jericho2 switch-routing chip boasts 10 Tbps capacity

Broadcom announced commercial availability of its Jericho2 and FE9600 chips, the next generation of its StrataDNX family of system-on-chip (SoC) Switch-Routers.

The Jericho2 silicon boasts 10 Terabits per second of Switch-Router performance and is designed for high-density, industry standard 400GbE, 200GbE, and 100GbE interfaces. Key features include the company's "Elastic Pipe" packet processing, along with large-scale buffering with integrated High Bandwidth Memory (HBM).

The new device began shipping within 24 months of its predecessor, Jericho+. Jericho2 delivers 5X higher bandwidth at 70% lower power per gigabit.

In addition to Jericho2, Broadcom is shipping FE9600, a new fabric switch device with 192 links of the industry's best-performing and longest-reach 50G PAM-4 SerDes. The device offers 9.6 Terabits per second of fabric capacity and delivers a 50% reduction in power per gigabit compared to its predecessor, FE3600.
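The headline numbers are internally consistent; a quick arithmetic check (our own illustration, not Broadcom's):

```python
# Quick arithmetic check of the quoted Broadcom figures (illustrative).

# FE9600: 192 SerDes links at 50G PAM-4 each.
fe9600_tbps = 192 * 50 / 1000
assert fe9600_tbps == 9.6  # matches the stated 9.6 Tbps fabric capacity

# Jericho2 vs. Jericho+: 5X the bandwidth at 70% lower power per gigabit.
# If those figures hold, total device power rises only ~1.5x for 5x the
# throughput (assuming power scales linearly with the per-gigabit figure).
relative_total_power = 5 * 0.3  # 5x bandwidth * 30% of the old power/Gb
print(relative_total_power)     # → 1.5
```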

“The Jericho franchise is the industry’s most innovative and scalable silicon used today in various Switch-Routers by leading carriers,” said Ram Velaga, Broadcom senior vice president and general manager, Switch Products. “I am thrilled with the 5X increase in performance Jericho2 was able to achieve over a single generation. Jericho2 will accelerate the transition of carrier-grade networks to merchant silicon-based systems with best-in-class cost/performance.”

Arrcus scales out with Broadcom's Jericho2, raises $30m 

Arrcus, a start-up that offers a hardware-agnostic network operating system for white box switches, announced multiple high-density 100GbE and 400GbE routing solutions for hyperscale cloud, edge, and 5G networks.

The company says its ArcOS software architecture has the foundational attributes to scale out to an open, aggregated routing solution, enabling operators to design, deploy, operationalize, and manage their infrastructure across multiple domains in the network.

"Our mission is to democratize the networking industry by providing best-in-class software, the most flexible consumption model, and the lowest total cost of ownership for our customers; we are now extending this by providing leading-edge open integration solutions for routing. ArcOS is the essential link to fully realize the unparalleled advancements in the 10Tbps Jericho2 SoC family and the resulting systems," said Devesh Garg, co-founder and CEO of Arrcus.


The new ArcOS-based platforms, built on Broadcom's 10 Tbps, highly flexible and programmable StrataDNX Jericho2 switch-router system-on-a-chip (SoC), include:

  • 24 ports of 100G + 6 ports of 400G
  • 40 ports of 100G
  • 80 ports of 100G
  • 96 ports of 100G
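Summing the client ports (an illustrative check of our own, not Arrcus figures) confirms that each configuration fits within the Jericho2 SoC's 10 Tbps budget:

```python
# Aggregate client bandwidth per ArcOS platform listed above; the dict
# structure is illustrative. All configurations fit within Jericho2's
# 10 Tbps switch-router capacity.
platforms = {
    "24 x 100G + 6 x 400G": 24 * 100 + 6 * 400,
    "40 x 100G": 40 * 100,
    "80 x 100G": 80 * 100,
    "96 x 100G": 96 * 100,
}
for name, gbps in platforms.items():
    assert gbps <= 10_000, f"{name} exceeds the SoC budget"
    print(f"{name}: {gbps / 1000:.1f} Tbps")
# → 4.8, 4.0, 8.0, and 9.6 Tbps respectively
```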