Wednesday, June 12, 2024

Samsung Foundry targets CPO-integrated AI silicon by 2027

Samsung introduced two new advanced semiconductor nodes, SF2Z and SF4U, and unveiled its integrated Samsung AI Solutions platform, leveraging the strengths of its Foundry, Memory, and Advanced Package (AVP) businesses. At its annual Samsung Foundry Forum in San Jose, California, Samsung executives emphasized the company’s unique ability to combine leading-edge process nodes, advanced packaging, and high-bandwidth memory (HBM) for customers designing the latest AI silicon.

  • The SF2Z, Samsung’s latest 2nm process, features an optimized backside power delivery network (BSPDN). The technology places power rails on the wafer’s backside to reduce bottlenecks between power and signal lines, enhancing power, performance, and area (PPA) compared with the first-generation SF2 node. SF2Z is scheduled for mass production in 2027. Meanwhile, SF4U is a high-value 4nm variant offering PPA improvements through optical shrink, with mass production set for 2025.
  • Samsung also confirmed that its preparations for the SF1.4 (1.4nm) process are on track, aiming for mass production in 2027. The company is focusing on material and structural innovations to develop future technologies below 1.4nm.

Additionally, Samsung plans to launch an all-in-one, co-packaged optics (CPO) integrated AI solution in 2027, designed to offer high-performance, low-power semiconductors optimized for AI applications. This initiative is part of Samsung’s broader strategy to provide comprehensive AI solutions.

"At a time when numerous technologies are evolving around AI, the key to its implementation lies in high-performance, low-power semiconductors," said Dr. Siyoung Choi, President and Head of Foundry Business at Samsung Electronics. "Alongside our proven GAA process optimized for AI chips, we plan to introduce integrated, co-packaged optics (CPO) technology for high-speed, low-power data processing, providing our customers with the one-stop AI solutions they need to thrive in this transformative era."

Around 30 partner companies exhibited at the event, highlighting collaboration across the Samsung Foundry ecosystem.


  • Cadence Design Systems has announced a collaboration with Samsung Foundry to advance technology for AI and 3D-IC semiconductor design on Samsung's gate-all-around (GAA) nodes. This partnership aims to enhance development for applications such as AI, automotive, aerospace, hyperscale computing, and mobile. Key achievements include the use of Cadence.AI to reduce leakage power by over 10% on the SF2 GAA platform and the certification of a complete Cadence backside implementation flow for Samsung's SF2 node. This collaboration has led to successful development and validation of a test chip, demonstrating readiness for advanced design implementations.
  • Synopsys announced that its AI-driven digital and analog design flows have been certified on Samsung Foundry's SF2 process, with multiple test chip tapeouts. Utilizing the Synopsys.ai™ full-stack EDA suite, these reference flows improve performance, power, and area (PPA), enhance productivity, and speed up analog design migration for Samsung's Gate-All-Around (GAA) process technologies. The certification was achieved through Synopsys' AI-driven design technology co-optimization (DTCO) solution, which provided superior PPA outcomes. These techniques will also be applied to Samsung's upcoming SF1.4 process.
  • Samsung Electronics is investing $17 billion in a new semiconductor fabrication facility in Taylor, Texas. The 1.1 million square foot fab will use advanced 3-nanometer process technology and is expected to start production in the second half of 2024. The facility will have a capacity of around 170,000 wafers per month, making it one of the largest and most advanced semiconductor manufacturing facilities in the world. The project is expected to create around 1,800 new jobs in the region.


NVIDIA Triples LLM Performance with H100 GPUs + Quantum-2 InfiniBand

NVIDIA has more than tripled its performance on the large language model (LLM) benchmark based on GPT-3 175B, compared with its record-setting submission from last year. The result was achieved on an AI supercomputer of 11,616 NVIDIA H100 Tensor Core GPUs interconnected with NVIDIA Quantum-2 InfiniBand networking. The gain reflects both the larger scale (more than triple the 3,584 H100 GPUs used previously) and extensive full-stack engineering improvements.
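
As a rough sanity check on the scaling claim, the sketch below compares the GPU-count ratio with the stated performance ratio, using only the figures quoted here. The exact benchmark times are not given in this article, so the performance ratio is taken at its stated lower bound.

    # Scaling check from the quoted figures. "More than tripled" is taken
    # as a 3.0x lower bound; the implied per-GPU efficiency also folds in
    # this year's software improvements, not scale alone.
    gpus_2023 = 3_584
    gpus_2024 = 11_616
    perf_ratio = 3.0                      # lower bound: "more than tripled"

    gpu_ratio = gpus_2024 / gpus_2023     # ~3.24x more GPUs
    efficiency = perf_ratio / gpu_ratio   # implied per-GPU scaling

    print(f"GPU ratio: {gpu_ratio:.2f}x")              # 3.24x
    print(f"Implied efficiency: >= {efficiency:.0%}")  # >= 93%, near-linear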

The scalability of the NVIDIA AI platform with InfiniBand networking allows significantly faster training of massive AI models like GPT-3 175B. This advancement translates into substantial business opportunities. For instance, NVIDIA’s recent earnings call highlighted how LLM service providers can achieve a sevenfold return on investment over four years by running the Llama 3 70B model on NVIDIA HGX H200 servers. This assumes a service charge of $0.60 per million tokens and an HGX H200 server processing 24,000 tokens per second.
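
The arithmetic behind that sevenfold figure can be reproduced from the quoted assumptions. In the sketch below, the server cost is an illustrative placeholder, since the article does not state one, and the calculation assumes continuous full utilization.

    # Back-of-the-envelope check of the sevenfold-ROI claim, assuming 24/7
    # full utilization. ASSUMED_SERVER_COST is NOT from the source; it is a
    # placeholder to show how the arithmetic closes.
    TOKENS_PER_SEC = 24_000       # HGX H200 on Llama 3 70B (quoted)
    PRICE_PER_M_TOK = 0.60        # USD per million tokens served (quoted)
    YEARS = 4

    seconds = YEARS * 365 * 24 * 3600
    revenue = TOKENS_PER_SEC * seconds / 1e6 * PRICE_PER_M_TOK
    print(f"Revenue over {YEARS} years: ${revenue:,.0f}")   # ~ $1.8M

    ASSUMED_SERVER_COST = 260_000  # placeholder assumption, not in the source
    print(f"Implied return: {revenue / ASSUMED_SERVER_COST:.1f}x")  # ~ 7.0x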

The NVIDIA H200 Tensor Core GPU, built on the Hopper architecture, includes 141GB of HBM3e memory and over 40% more memory bandwidth than its predecessor, the H100. In its MLPerf Training debut, the H200 demonstrated up to a 47% performance increase over the H100. Additionally, software optimizations have made NVIDIA’s 512-GPU H100 configurations up to 27% faster than last year, showing how continuous software enhancements boost performance even on the same hardware.

Some highlights:

  • AI Supercomputer: 11,616 NVIDIA H100 Tensor Core GPUs connected with NVIDIA Quantum-2 InfiniBand.
  • Performance Gains: Tripled LLM benchmark performance over last year’s submission.
  • H200 GPU: Features 141GB HBM3e memory and over 40% more memory bandwidth than the H100.
  • Software Optimizations: 512 H100 GPU configurations now 27% faster than a year ago.
  • Scalability: GPU count increased from 3,584 to 11,616.
  • LLM Service ROI: Potential sevenfold return on investment with Llama 3 70B on HGX H200 servers.
  • Stable Diffusion and GNN Training: Up to 80% performance boost for Stable Diffusion v2 and significant gains in GNN training.
  • Broad Support: Participation from industry leaders like ASUS, Dell, HPE, Lenovo, and others in NVIDIA’s AI ecosystem.


Oracle Cloud Infrastructure to provide capacity for OpenAI

OpenAI will extend its Microsoft Azure AI platform using Oracle Cloud Infrastructure (OCI). The agreement will provide additional capacity for OpenAI.

“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.

“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”

Oracle notes that its OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, and recommendation systems.
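
As one concrete illustration of the OCI Compute offerings described above, here is a minimal sketch of launching a bare metal GPU instance with the OCI Python SDK. The shape name and every OCID below are placeholders, not values tied to this announcement; substitute values from your own tenancy.

    import oci

    # Minimal sketch: provision a bare metal GPU instance via the OCI
    # Python SDK. Shape and OCIDs are illustrative placeholders.
    config = oci.config.from_file()           # reads ~/.oci/config
    compute = oci.core.ComputeClient(config)

    details = oci.core.models.LaunchInstanceDetails(
        compartment_id="ocid1.compartment.oc1..example",
        availability_domain="Uocm:PHX-AD-1",
        shape="BM.GPU.H100.8",                # placeholder bare metal GPU shape
        display_name="genai-node",
        source_details=oci.core.models.InstanceSourceViaImageDetails(
            image_id="ocid1.image.oc1..example",    # a GPU-enabled OS image
        ),
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1..example",  # placeholder subnet
        ),
    )

    instance = compute.launch_instance(details).data
    print(instance.id, instance.lifecycle_state)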

https://www.oracle.com/news/

Google Cloud and Oracle Cloud Infrastructure announce partnership

Oracle and Google Cloud have announced a partnership to integrate Oracle Cloud Infrastructure (OCI) and Google Cloud technologies. This collaboration will initially allow customers to deploy general-purpose workloads with no cross-cloud data transfer charges in 11 global regions through Google Cloud’s Cross-Cloud Interconnect. Later this year, the partnership will introduce Oracle Database@Google Cloud, offering the highest level of Oracle database and network performance with feature and pricing parity with OCI.

Oracle will manage Oracle database services within Google Cloud data centers globally, starting with North America and Europe. Services like Oracle Exadata Database Service, Oracle Autonomous Database Service, and Oracle Real Application Clusters (RAC) will launch later this year in four regions: US East (Ashburn), US West (Salt Lake City), UK South (London), and Germany Central (Frankfurt). These services will rapidly expand to additional regions worldwide, providing enterprises with robust database solutions directly within Google Cloud.

This partnership will allow customers to deploy workloads across both OCI and Google Cloud regions without incurring cross-cloud data transfer charges. Initially available in 11 regions, including Australia East (Sydney), Australia South East (Melbourne), Brazil East (São Paulo), Canada South East (Montreal), Germany Central (Frankfurt), India West (Mumbai), Japan East (Tokyo), Singapore, Spain Central (Madrid), UK South (London), and US East (Ashburn), the service will expand to more regions over time. This integration provides a low-latency, high-throughput, private connection between the two leading cloud providers, ensuring seamless interoperability and enhanced performance.

Key Points:

  • Oracle and Google Cloud are partnering to integrate OCI and Google Cloud technologies.
  • The collaboration will initially allow general-purpose workloads with no cross-cloud data transfer charges in 11 global regions.
  • Oracle Database@Google Cloud will be introduced later this year, offering high performance and feature parity with OCI.
  • Oracle will manage its database services within Google Cloud data centers globally, starting in North America and Europe.
  • Oracle Exadata, Autonomous Database, and RAC services will launch in US East, US West, UK South, and Germany Central regions.
  • The services will rapidly expand to additional regions worldwide.
  • Customers can deploy workloads across both OCI and Google Cloud regions with no cross-cloud data transfer charges.
  • The initial 11 regions include locations in Australia, Brazil, Canada, Germany, India, Japan, Singapore, Spain, the UK, and the US.

https://www.oracle.com/news/announcement/oracle-and-google-cloud-announce-groundbreaking-multicloud-partnership-2024-06-11/

MEF and TM Forum to align APIs and service models

MEF and TM Forum announced a joint initiative to align their respective APIs and product and service models. This collaboration aims to streamline automation for MEF Network-as-a-Service (NaaS) implementations across a global partner ecosystem. The initiative will integrate MEF’s Lifecycle Service Orchestration (LSO) APIs with TM Forum’s Gen5 Open API standards, using Domain Context Specialization (DCS) to support standardized MEF-defined models. The goal is to create a unified, standardized approach that enhances automation within MEF’s NaaS domain by the end of 2025.

MEF NaaS services include a range of on-demand connectivity options such as Carrier Ethernet and IP, application assurance through SD-WAN and E2E network slicing, and cybersecurity capabilities including SASE, SSE, and ZTNA. Additionally, these services extend to multi-cloud environments for multi-access edge computing (MEC) and cloud connectivity. MEF will design its product and service models to conform to TM Forum’s Gen5 API DCS design patterns and governance, ensuring compatibility and streamlined automation.

TM Forum will contribute its suite of automation assets, including the Open Digital Architecture, Gen5 Domain Context Specialization APIs, and the NaaS TMF909 API suite. These assets will provide an abstraction layer that maps NaaS services to network resources regardless of vendor implementation. This collaboration addresses the digital infrastructure sector’s current challenge of proprietary and non-standard API implementations, which hinder scalability and automation.

By standardizing APIs, the initiative aims to reduce market confusion, cut capex and opex costs, and accelerate time to revenue. MEF’s LSO APIs currently facilitate extensive interoperability and automation of business and operational functions among ecosystem partners. With more than 160 service providers adopting MEF APIs and TM Forum’s portfolio of over 80 Open APIs achieving widespread industry adoption, this collaboration is set to advance digital services for enterprises through industry-wide standardization.
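
To make the alignment concrete, the sketch below shows roughly what ordering a MEF-modeled service through a TM Forum-style Open API could look like. The endpoint, identifiers, and characteristic values are invented placeholders; the actual MEF-aligned schemas are what this initiative is set to deliver by the end of 2025.

    import requests

    # Illustrative sketch of a TMF641-style service order (Service Ordering
    # sits within the TMF909 NaaS API suite) carrying MEF-defined service
    # characteristics. Endpoint, IDs, and values are placeholders.
    BASE = "https://api.example-provider.net/tmf-api/serviceOrdering/v4"

    order = {
        "externalId": "naas-demo-0001",
        "serviceOrderItem": [{
            "id": "1",
            "action": "add",
            "service": {
                "serviceType": "CarrierEthernet",   # a MEF-modeled service
                "serviceCharacteristic": [
                    {"name": "bandwidth", "value": "1Gbps"},
                    {"name": "classOfService", "value": "Gold"},
                ],
            },
        }],
    }

    resp = requests.post(f"{BASE}/serviceOrder", json=order, timeout=30)
    resp.raise_for_status()
    print(resp.json().get("state"))   # e.g. "acknowledged"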


Key Points:

  • MEF and TM Forum are collaborating to align their API and service models.
  • The initiative targets automation for MEF NaaS implementations across a global partner ecosystem.
  • MEF will conform its LSO APIs to TM Forum’s Gen5 Open API standards using Domain Context Specialization (DCS).
  • TM Forum will provide its Open Digital Architecture, Gen5 Domain Context Specialization APIs, and NaaS TMF909 API suite.
  • Standardized APIs will address the issue of non-scalable proprietary implementations.
  • MEF and TM Forum’s collaboration will deliver a full API suite for automation by the end of 2025, benefiting over 160 service providers and leveraging TM Forum’s widely adopted Open APIs.

“Enterprises expect the same agility in network services from NaaS that they experience with cloud. MEF’s widely adopted LSO API product and service schemas are crucial to meeting these expectations,” said Pascal Menezes, CTO, MEF. “Our expanded collaboration with TM Forum to align on common APIs and data models empowers service providers to automate the full lifecycle delivery of complex multi-provider NaaS services with the ease, agility and responsiveness businesses expect in the cloud era.”

“True end-to-end automation requires industry-wide collaboration on common specifications to enable automated service delivery across the broader telecommunications ecosystem,” said George Glass, CTO, TM Forum. “Our work with MEF serves as a blueprint for other domains facing similar multi-partner integration challenges as we continue collaborating to drive industry-wide standardization.”

https://www.mef.net/news/mef-and-tm-forum-unify-apis-and-service-models-to-automate-ecosystem-for-mef-naas-services/

Broadcom posts revenue of $12.5B, up 12% yoy excluding VMware

Broadcom reported revenue of $12,487 million for its second quarter of fiscal year 2024, which ended May 5, 2024. The company also provided guidance for fiscal 2024 and announced its quarterly dividend.

Second-quarter GAAP net income was $2,121 million, and non-GAAP diluted EPS was $10.96. The quarterly common stock dividend was $5.25 per share.

"Consolidated revenue grew 43% year-over-year to $12.5 billion, including the contribution from VMware, and was up 12% year-over-year, excluding VMware. Adjusted EBITDA increased 31% year-over-year to $7.4 billion," said Kirsten Spears, CFO of Broadcom Inc. "Free cash flow, excluding restructuring and integration in the quarter, was $5.3 billion, up 18% year-over-year. Today we are announcing a ten-for-one forward stock split of Broadcom's common stock, to make ownership of Broadcom stock more accessible to investors and employees."



https://www.broadcom.com

