Sunday, June 2, 2024

NVIDIA scales its Spectrum-X Ethernet networking platform

At COMPUTEX 2024 in Taiwan, NVIDIA's Jensen Huang unveiled a roadmap for new semiconductors that will arrive on a one-year rhythm. 

The Rubin platform will succeed the upcoming Blackwell platform, featuring new GPUs, a new Arm-based CPU — Vera — and advanced networking with NVLink 6, CX9 SuperNIC and the X1600 converged InfiniBand/Ethernet switch.

“Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology limits,” Huang explained.

Spectrum-X features the NVIDIA Spectrum SN5600 Ethernet switch and the NVIDIA BlueField-3 SuperNIC. The platform leverages adaptive routing and congestion control for maximum bandwidth and noise isolation. It enables advanced cloud multi-tenancy, GPU compute elasticity and zero-trust security.

NVIDIA will launch new Spectrum-X products every year, delivering increased bandwidth and ports and enhanced software feature sets and programmability. NVIDIA claims Spectrum-X accelerates generative AI network performance by 1.6x over traditional Ethernet fabrics. Data center networks based on Spectrum-X switches are currently designed to connect tens of thousands of GPUs. With 1.6 Tbps interfaces on the horizon, this will soon increase to millions.

The next generation NVLink Switch chip will feature:

  • 50 billion transistors, fabricated on TSMC's 4NP process
  • 72 ports of 400G SerDes
  • 4 NVLinks at 1.8 TB/s
  • 7.2 TB/s full-duplex bandwidth
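
The listed figures are internally consistent. A quick sanity check (a sketch; assumes 8 bits per byte and that "full-duplex" counts both directions):

```python
# Sanity check of the NVLink Switch figures quoted above.
ports = 72
gbit_per_port = 400                          # 400G SerDes per port
aggregate_gbps = ports * gbit_per_port       # 28,800 Gb/s raw
one_way_tbytes = aggregate_gbps / 8 / 1000   # 3.6 TB/s in each direction
full_duplex_tbytes = 2 * one_way_tbytes      # 7.2 TB/s, matching the spec
print(full_duplex_tbytes)
```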

Another key networking aim for NVIDIA is bringing the capabilities of InfiniBand to Ethernet for hyperscale data centers, including:

  • Network-level RDMA
  • Congestion control using the switch telemetry
  • Adaptive routing using the BlueField NICs
  • Noise isolation between training models

NVIDIA cited rapid adoption of its Spectrum-X Ethernet networking platform. 

CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global and Yotta are among the first AI cloud service providers embracing NVIDIA Spectrum-X. Additionally, several NVIDIA partners have announced Spectrum-based products, including ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron and Wiwynn, which join Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro in incorporating the platform into their offerings.


“Rapid advancements in groundbreaking technologies like generative AI underscore the necessity for every business to prioritize networking innovation to gain a competitive edge,” said Gilad Shainer, senior vice president of networking at NVIDIA. “NVIDIA Spectrum-X revolutionizes Ethernet networking to let businesses fully harness the power of their AI infrastructures to transform their operations and their industries.”

https://nvidianews.nvidia.com/news/nvidia-supercharges-ethernet-networking-for-generative-ai

https://www.youtube.com/watch?v=pKXDVsWZmUU&t=4772s

AMD scales its GPU accelerators for AI

During an opening keynote at COMPUTEX in Taiwan, AMD's CEO, Dr. Lisa Su, unveiled the company's next-generation Instinct MI325X GPU accelerator with up to 288GB of HBM3E memory, set for release later this year. AMD will pursue an annual cadence for new product releases.

Following the Instinct MI325X, the AMD Instinct MI350 series, powered by the new AMD CDNA 4 architecture, is expected to be available in 2025, bringing up to a 35x increase in AI inference performance compared to the AMD Instinct MI300 series with AMD CDNA 3 architecture. Expected to arrive in 2026, the AMD Instinct MI400 series is based on the AMD CDNA “Next” architecture.

“The AMD Instinct MI300X accelerators continue their strong adoption from numerous partners and customers including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo and others, a direct result of the AMD Instinct MI300X accelerator's exceptional performance and value proposition,” said Brad McCredie, corporate vice president, Data Center Accelerated Compute, AMD. “With our updated annual cadence of products, we are relentless in our pace of innovation, providing the leadership capabilities and performance the AI industry and our customers expect to drive the next evolution of data center AI training and inference.”

Finally, AMD highlighted the demand for AMD Instinct MI300X accelerators continues to grow with numerous partners and customers using the accelerators to power their demanding AI workloads, including:

  • Microsoft Azure using the accelerators for Azure OpenAI services and the new Azure ND MI300X V5 virtual machines.
  • Dell Technologies using MI300X accelerators in the PowerEdge XE9680 for enterprise AI workloads.
  • Supermicro providing multiple solutions with AMD Instinct accelerators.
  • Lenovo powering hybrid AI innovation with the ThinkSystem SR685a V3.
  • HPE using them to accelerate AI workloads in the HPE Cray XD675.


New subsea cable to link Singapore with Batam, Indonesia

Singtel and PT Telekomunikasi Indonesia International (Telin) agreed to develop a new submarine cable system connecting Singapore and Batam, Indonesia, under the newly formed INSICA (Indonesia Singapore Cable System) Consortium.

When operational in the fourth quarter of 2026, the 100-km INSICA cable system will support the surge in data centre telecommunications traffic between Singapore and Batam. INSICA will feature a 24-fiber pair subsea cable and two diverse terrestrial cable paths, offering a maximum capacity of up to 20 terabits per second per fiber pair. This will deliver exceptional bandwidth, seamless connectivity and robust network security and enable efficient resource sharing and scalability. The new diverse link provided by INSICA will enhance network protection and reliability, ensuring uninterrupted 24/7 operations for data centres.
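
The headline numbers imply a sizeable aggregate: 24 fiber pairs at up to 20 Tb/s each. A back-of-the-envelope check using only the figures above:

```python
# Implied aggregate capacity of the INSICA cable system.
fiber_pairs = 24
tbps_per_pair = 20                             # maximum per fiber pair
aggregate_tbps = fiber_pairs * tbps_per_pair   # 480 Tb/s total
print(aggregate_tbps)
```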

Mr Ooi Seng Keat, Vice President of Digital Infrastructure & Services at Singtel, stated, “Batam is emerging as a prime location for data centres due to its close proximity to Singapore. With this cable system, we’ll be able to enhance the connectivity between the countries to support the intensive, higher power density AI workloads of enterprises and cloud companies. The development of the INSICA cable system is yet another step that we’re taking in architecting a hyper-connected, digital ecosystem to serve the long-term demands of the region’s digital future and boost the regional economy.”


STM plans high-volume 200mm silicon carbide fab in Italy

STMicroelectronics has unveiled plans to establish a new high-volume silicon carbide (SiC) manufacturing facility in Catania, Italy. This facility will be part of ST’s Silicon Carbide Campus, designed for mass production of SiC power devices and modules, including a full range of manufacturing processes from substrate to packaging.

Key points include:

  • Integration of Facilities: The new facility will work alongside an SiC substrate manufacturing site to create a fully integrated manufacturing hub for SiC on a single site.
  • Silicon Carbide Campus Features: The campus will encompass all production stages, including SiC substrate development, epitaxial growth, and front-end wafer fabrication using 200mm technology. It will also handle module assembly and have extensive R&D and design capabilities.
  • Production and Investment: Production is expected to start in 2026, with full capacity reached by 2033, producing up to 15,000 wafers weekly. The total investment is projected at about five billion euros, with around two billion euros in support from the Italian government under the EU Chips Act.
  • Sustainability Commitment: The campus design and operations will emphasize sustainable practices, particularly in resource consumption like water and power.


“The fully integrated capabilities unlocked by the Silicon Carbide Campus in Catania will contribute significantly to ST’s SiC technology leadership for automotive and industrial customers through the next decades,” said Jean-Marc Chery, President and Chief Executive Officer of STMicroelectronics. “The scale and synergies offered by this project will enable us to better innovate with high-volume manufacturing capacity, to the benefit of our European and global customers as they transition to electrification and seek more energy efficient solutions to meet their decarbonization goals.”


FCC marks the end of the Affordable Connectivity Program

Due to a lack of additional Congressional funding, the FCC officially ended the Affordable Connectivity Program (ACP).  

During the wind-down of the ACP, Chairwoman Rosenworcel sent monthly letters to Congress emphasizing the program’s importance and the need for additional funding. In a recent letter, she highlighted the nationwide necessity to support low-income families struggling to afford high-speed internet and detailed the Commission’s actions to mitigate the impact of the ACP’s conclusion on enrolled households. 

“The Affordable Connectivity Program filled an important gap that provider low-income programs, state and local affordability programs, and the Lifeline program cannot fully address,” said Chairwoman Rosenworcel. “The Commission is available to provide any assistance Congress may need to support funding the ACP in the future and stands ready to resume the program if additional funding is provided.”

Agency wind-down measures have included: (1) encouraging ACP providers, for which participation in the ACP was voluntary, to develop low-income programs of their own and to provide their ACP subscribers information on their low-income programs or low-cost plans; (2) offering training and resources for state public utility commissions and agency ACP grantees and outreach partners to raise awareness of the Commission’s Lifeline program; and (3) reminding current Lifeline providers of their requirement to publicize the Lifeline program.

The Lifeline program offers a $9.25 monthly benefit on broadband service for eligible households. Although the Lifeline benefit may alleviate some financial pressure for certain ACP households, it is not a replacement for the ACP.  Not all ACP households will qualify for Lifeline, and by statute, many ACP providers are not eligible to participate in the Lifeline program.

Some highlights on the Affordable Connectivity Program: 

  • Over 23 million households were enrolled in the ACP at the time the program stopped accepting new enrollments. 
  • The ACP served households in every county in the United States.  
  • The participation among households in Tribal areas increased by 136 percent, with approximately 330,000 Tribal subscribers enrolled in the program when the enrollment freeze took effect.
  • It provided low-income households consistent connectivity. In response to the FCC’s winter 2023 agency survey, 68% of ACP households reported they had either inconsistent connectivity or no connectivity at all before the ACP.
  • Roughly 15 percent of all households in the program are from rural areas. 
  • According to a national survey, more than four million households with an active or former military member are enrolled in the ACP.  
  • Nearly half of ACP households are led by someone over the age of 50. 
  • Approximately 3.4 million households seeking to enroll in the ACP indicated participation in the National School Lunch or Breakfast Programs as one of the ways they qualify for the ACP.


Thursday, May 30, 2024

Photonic Inc and Microsoft: Quantum entanglement at telecom wavelengths

Photonic Inc., a start-up developing distributed quantum computing in silicon, cited a significant milestone: entanglement between modules. 

Microsoft, which has been collaborating with Photonic since last November, said the "accomplishment demonstrates that existing telecommunication networks have the potential to enable long-distance quantum communications—the foundation for a quantum internet and distributed quantum computing."  

Photonic’s approach is based on optically linked silicon spin qubits with a native telecom networking interface, meaning that it can integrate with the infrastructure, platforms, and scale of today’s global telecommunications networks, including the Microsoft Azure cloud. Three demonstrations, culminating in the teleported CNOT gate sequence, established and consumed distributed quantum entanglement—entanglement between qubits not adjacent to one another or even in the same cryostat.

“The crucial role that entanglement distribution will play in unlocking the commercial promise of quantum computing cannot be overstated. Large-scale quantum algorithms running across multiple quantum computers require enormous amounts of distributed entanglement to work well,” said Dr. Stephanie Simmons, Founder and Chief Quantum Officer at Photonic. “These demonstrations highlight the promise of our distinctive architectural approach to solve the challenge of scaling beyond single nodes. While there is still much work ahead, it’s important to acknowledge the pivotal role that entanglement distribution must play in shaping quantum system designs.”

Read Photonic’s scientific paper 

Read Photonic’s new whitepaper Distributed Quantum Computing in Silicon: Entanglement Between Modules

Read the Microsoft Azure Quantum Blog

Photonic Inc builds its Quantum team

Photonic Inc., a start-up based in Vancouver that is advancing distributed quantum computing in silicon, announced key additions to its research team: Dr. Chantal Arena joins as vice president of research, development, and production – devices. Arena has 35 years' experience in business strategy, materials science, and semiconductor device fabrication, most recently as co-chief executive and technical officer at Lawrence Semiconductors. She holds...

Photonic Inc. unveils its quantum in silicon, $140m in funding

Photonic Inc., a start-up based in Vancouver, unveiled its architecture for scalable, fault-tolerant, and unified quantum computing and networking platforms based on photonically linked silicon spin qubits. The company specializes in spin-photon interfaces in silicon, silicon integrated photonics, and quantum optics. Photonic's technology provides computing (with spin qubits), networking (via photons), and memory. Photonic links in silicon deliver...


Marvell intros PCIe retimers based on PAM4

Marvell introduced its Alaska P PCIe retimer product line for data center compute fabrics inside accelerated servers, general-purpose servers, CXL systems and disaggregated infrastructure. 

The first two products, 8- and 16-lane PCIe Gen 6 retimers, connect AI accelerators, GPUs, CPUs and other components inside server systems.  The 16-lane retimer is sampling now to customers and ecosystem partners; the 8-lane product will sample next quarter. 

PCIe retimers are key to supporting the faster inside-server-system connection speeds of PCIe Gen 6 over longer distances.

PCIe Gen 6, which operates at 64 gigatransfers per second (GT/s), is the first PCIe standard to use four-level pulse-amplitude modulation (PAM4) signaling, displacing the non-return-to-zero (NRZ) modulation used for the last 20 years. The higher data rate limits the physical reach over which signals can travel reliably, reducing the distance connections can span.
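
PAM4 carries two bits per symbol, which is how Gen 6 doubles the data rate without doubling the channel's symbol rate. A rough illustration of the rates involved (a sketch; the x16 figure is raw bandwidth, ignoring FLIT/FEC encoding overhead):

```python
# PCIe per-lane rates: Gen 5 uses NRZ (1 bit/symbol), Gen 6 uses PAM4 (2 bits/symbol).
gen5_gt_s, gen5_bits = 32, 1
gen6_gt_s, gen6_bits = 64, 2

gen5_baud = gen5_gt_s / gen5_bits    # 32 Gbaud
gen6_baud = gen6_gt_s / gen6_bits    # 32 Gbaud -- same symbol rate as Gen 5
lanes = 16
raw_gbytes = gen6_gt_s * lanes / 8   # ~128 GB/s per direction, before overhead
print(gen6_baud, raw_gbytes)
```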

Marvell Alaska P retimers address this by compensating for the signal degradations and regenerating the signal to deliver reliable communication over the physical distances required for connections between GPUs and CPUs within an AI server, between GPUs on different boards, or between CPUs and a pool of shared memory enabled by CXL, among other use cases. 

Marvell says its retimers can be used on AI accelerator baseboards, server motherboards, riser cards, or integrated into active electrical cables (PCIe AEC) and active optical cables (PCIe AOC) for emerging multi-rack server system architectures.

The retimers can be used for on-board or cable copper connections or combined with electrical-to-optical components to produce optical PCIe modules, addressing different cloud customer data center architectures. Marvell is working with cable and optical module partners to integrate the products into cloud-optimized interconnect solutions for different data center customer applications.

Key features include:

  • Compatible with PCI Express Gen 6/5/4/3/2/1 and Compute Express Link 3/2/1.1
  • Industry-leading PAM4 SerDes performance
  • Low-latency mode for cache-coherent links
  • Industry’s lowest power consumption (10W PCIe 6 x16)
  • Industry-standard x16 and x8 footprints
  • Advanced telemetry and diagnostics: in-band FEC monitoring, out-of-band SerDes eye monitoring, embedded logic analyzer and software suite for fleet management in large-scale deployments

“Marvell is the industry leader for data center-to-data center, switch-to-switch, rack-to-rack, server-to-switch, and server-to-server connectivity. We are now entering the market for compute fabrics as PCIe and CXL go through an inflection point, migrating from NRZ to PAM4 technology,” said Venu Balasubramonian, vice president of product marketing, Connectivity Business Unit at Marvell. “Marvell is building on more than 10 years of in-house expertise in PAM4 technology and our industry-leading 5nm PAM4 IP portfolio to enable this transition. Our Alaska P PCIe retimer family is an important addition to the comprehensive Marvell accelerated infrastructure portfolio.”

  • Marvell has been a pioneer in PAM4 technology for over a decade, and the new Alaska P PCIe retimer product line expands its PAM4 connectivity portfolio beyond Ethernet and InfiniBand to include copper and optical PCIe, CXL, and proprietary compute fabric links, addressing connections within AI and general-purpose server systems and broadening Marvell’s market reach.
  • In 2023, Marvell introduced Nova, the industry’s first 1.6T PAM4 DSP, along with integrated PAM4 DSPs (Perseus) and efficiency-optimized DSPs (Spica Gen2-T) to support various cloud data center link types and use cases. Additionally, Marvell’s PAM4 technology underpins the Alaska A DSP chips, which are optimized for active electrical cable (AEC) applications.

European Commission approves KKR's acquisition of TIM

The European Commission has unconditionally approved the acquisition by Kohlberg Kravis Roberts & Co. (KKR), a U.S.-based investment group, of NetCo, which comprises the primary and backbone fixed-line network business of Telecom Italia S.p.A. (‘TIM') as well as FiberCop S.p.A. (‘FiberCop'). FiberCop is a joint venture between TIM and KKR comprising TIM's secondary fixed-line network.

The deal, which was announced in November 2023, is valued at EUR 22 billion.

The Commission's investigation into the impact of the acquisition found that:

  • KKR would not have the ability to restrict access to passive services (i.e., infrastructure). For each wholesale product the number of available networks and wholesale providers will stay the same and the market power of NetCo will not materially increase as compared to TIM or FiberCop today. The existing long-term agreements with several access seekers, including Fastweb and Iliad, which have been entered into after the creation of FiberCop in 2021, ensure that KKR will not be able to deteriorate the conditions for wholesale access or terminate such access.
  • The transaction would not increase the likelihood of coordination between NetCo and Open Fiber, given that Fastweb will continue to exert competitive pressure on NetCo and its long-standing competitor, Open Fiber. In addition, NetCo and Open Fiber will likely continue to compete, both to attract new customers and to roll out fibre networks, either in new areas or in each other's areas.

https://ec.europa.eu/commission/presscorner/detail/en/IP_24_2993

Google to invest US$2 billion in data center and Cloud region in Malaysia

Google is investing US$2 billion to build its first Google data center and Google Cloud region in Malaysia.

The new facility will be located in the Elmina Business Park in the Greater Kuala Lumpur region.

Marvell posts revenue of $1.161 billion

Marvell Technology reported net revenue for its first quarter of fiscal 2025 of $1.161 billion, $11.0 million above the mid-point of the company's guidance provided on March 7, 2024. GAAP net loss for the first quarter of fiscal 2025 was $(215.6) million, or $(0.25) per diluted share. Non-GAAP net income for the first quarter of fiscal 2025 was $206.7 million, or $0.24 per diluted share. Cash flow from operations for the first quarter was $324.5 million.

"Marvell delivered first quarter fiscal 2025 revenue of $1.161 billion, above the mid-point of guidance, driven by stronger than forecasted demand from AI. Our data center revenue grew 87% year over year, with the start of a ramp in our custom AI programs complementing our substantial base of electro-optics revenue," said Matt Murphy, Marvell's Chairman and CEO. "For the second quarter of fiscal 2025, we are guiding an 8% sequential increase in revenue at the mid-point, fueled by ramping custom AI silicon. We see a favorable setup for the second half of this fiscal year, driven by continued growth in data center and the beginning of a recovery in enterprise networking and carrier infrastructure."
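
The guidance figures can be reconciled directly (a sketch using only the numbers quoted above):

```python
# Implied Q2 FY2025 revenue mid-point from the stated 8% sequential growth.
q1_revenue_b = 1.161                  # Q1 FY2025 revenue, $B
q2_guide_mid_b = q1_revenue_b * 1.08  # 8% sequential increase at the mid-point
print(round(q2_guide_mid_b, 3))       # ~$1.254B implied Q2 mid-point
```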

https://www.marvell.com

Wednesday, May 29, 2024

Cignal AI: Optical and routing hardware spending drops 15% in 1Q24

Despite continued spending from cloud operators, the Optical and Routing Transport equipment market declined 15% in 1Q24, according to a new report from Cignal AI.

"Cloud Operators continue to provide the financial and technical leadership in the transport equipment market," said Kyle Hollasch, Lead Analyst at Cignal AI. “Based on our discussions within the supply chain, we don’t expect a recovery in spending from traditional service providers until 2025, at the earliest.”

Additional 1Q24 Transport Hardware Report Findings:

  • Spending in North America on Optical Transport equipment declined for the 4th straight quarter, while Routing dropped to levels last seen in 2020.
  • Cloud Operator expenditures grew by double digits while traditional service providers continued to cut back. Sales to Cloud & Colo operators exceeded Service Provider spending for the second consecutive quarter. Ciena remains the primary beneficiary of Cloud spending.
  • Network operators in EMEA spent cautiously, apart from buildouts for cloud operators, which grew almost 50%.
  • Chinese operators are upgrading long-haul WDM infrastructure, and the latest contract awards benefitted Huawei, ZTE, and Fiberhome. Routing equipment in the region remains flat.
  • Now that Indian operators have completed a series of 5G-related builds, spending in RoAPAC (ex-China and Japan) is in a downturn.

https://cignal.ai/


Arista to align compute and network domains as a single managed AI entity

Arista Networks, in collaboration with NVIDIA, hosted a technology demonstration showcasing AI Data Centers that integrate compute and network domains into a single managed AI entity. This initiative aims to help customers configure, manage, and monitor AI clusters uniformly across key components, including networks, NICs, and servers. By demonstrating this unified approach, Arista and NVIDIA highlight the potential for a multi-vendor, interoperable ecosystem that allows for better control and coordination between AI networking and compute infrastructure.

The technology demonstration introduced an Arista EOS-based remote AI agent, which enables the combined AI cluster to be managed as a single solution. With EOS running on the network, this remote AI agent extends its capabilities to servers and SuperNICs, allowing for real-time tracking and reporting of performance issues between hosts and networks. This integration ensures that any performance degradation or failures can be quickly isolated and mitigated, optimizing the end-to-end quality of service (QoS) within the AI Data Center.

As AI clusters and large language models (LLMs) grow in complexity and size, the need for uniform controls across AI servers and network switches becomes critical. The demonstration addressed the challenges of managing disparate components such as GPUs, NICs, switches, optics, and cables. By providing a single point of control and visibility, the Arista EOS-based solution helps prevent misconfigurations and misalignments that can adversely affect job completion times. Additionally, the coordinated management and monitoring of compute and network resources ensure efficient congestion management, minimizing packet drops and optimizing GPU utilization.

Highlights of the demo

  • Collaboration between Arista Networks and NVIDIA for AI Data Centers.
  • Unified management of AI clusters across networks, NICs, and servers.
  • Demonstration of a multi-vendor, interoperable ecosystem.
  • Introduction of an Arista EOS-based remote AI agent.
  • Real-time tracking and reporting of performance issues.
  • Optimization of end-to-end QoS within the AI Data Center.
  • Single point of control and visibility for AI clusters.
  • Efficient congestion management and optimization of GPU utilization.

“Arista aims to improve efficiency of communication between the discovered network and GPU topology to improve job completion times through coordinated orchestration, configuration, validation, and monitoring of NVIDIA accelerated compute, NVIDIA SuperNICs, and Arista network infrastructure,” said John McCool, Chief Platform Officer for Arista Networks.



New Subsea Cable to link UK, Netherlands, Germany, Denmark, and Norway

IOEMA Fibre announced plans for a high-capacity, 1,400 km repeatered submarine cable system connecting five key northern European markets – the UK, the Netherlands, Germany, Denmark and Norway – supporting critical infrastructure security with full armouring and burial.

The 48-fibre pair system will support a minimum overall capacity of 1.3 Pb/s.
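
Dividing the stated minimum system capacity across the 48 fibre pairs gives the implied per-pair figure (a back-of-the-envelope sketch):

```python
# Implied per-fibre-pair capacity of the IOEMA system.
fiber_pairs = 48
system_capacity_pbps = 1.3                                 # minimum, Pb/s
per_pair_tbps = system_capacity_pbps * 1000 / fiber_pairs  # ~27 Tb/s per pair
print(round(per_pair_tbps, 1))
```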

The IOEMA cable system consists of a trunk route, connecting Dumpton Gap, UK with Kristiansand, Norway and three branches, connecting Eemshaven, The Netherlands; Wilhelmshaven, Germany; and Blaabjerg, Denmark. The cable connects with vital transatlantic crossings Havfrue (DK), Leif Erikson (NO) and other planned Trans-Atlantic cables.

The IOEMA system will be the first submarine fibre optic cable landing on the North Sea shores of Germany in over 25 years. After decommissioning of TAT-14, SEA-ME-WE 3 and others, the IOEMA submarine fibre optic cable system will be the only cable system connecting Germany to the submarine cable networks in the North Sea and beyond.



AST SpaceMobile signs Verizon for Direct-to-Cellular

AST SpaceMobile announced a strategic partnership with Verizon, including a $100 million commitment, to provide direct-to-cellular satellite service.

The partnership will combine Verizon’s terrestrial mobile network, the multi-operator 850 MHz band and AST SpaceMobile's communications arrays in low Earth orbit.

The $100 million Verizon commitment includes $65 million of commercial prepayments, $45 million of which are subject to certain conditions, and $35 million of convertible notes.

“This new partnership with Verizon will enable AST SpaceMobile to target 100% coverage of the continental United States on premium 850 MHz spectrum with two major U.S. mobile operators in the most valuable wireless market in the world, a transformational commercial milestone,” said Abel Avellan, Founder, Chairman, and CEO of AST SpaceMobile. “This partnership will enhance cellular connectivity in the United States, essentially eliminating dead zones and empowering remote areas of the country with space-based connectivity.”

“Verizon has always been strategic and efficient with our spectrum strategy. We use the spectrum entrusted to us to deliver outstanding cellular service for our customers through our terrestrial network. By entering into this agreement with AST, we will now be able to use our spectrum in conjunction with AST’s satellite network to provide essential connectivity in remote corners of the U.S. where cellular signals are unreachable through traditional land-based infrastructure,” said Srini Kalapala, Senior Vice President of Technology and Product Development at Verizon.

AST SpaceMobile now has agreements with more than 45 mobile network operators globally who collectively serve over 2.8 billion existing subscribers. 

AT&T and AST SpaceMobile target space-to-mobiles

AT&T and AST SpaceMobile entered a commercial agreement to provide space-based broadband network service direct to everyday cell phones. The deal runs through 2030. This summer, AST SpaceMobile plans to deliver its first commercial satellites to Cape Canaveral for launch into low Earth orbit. These initial five satellites will enable commercial service that was previously demonstrated with several key milestones. Key Takeaways: AT&T and AST SpaceMobile...

AST Mobile Tapes Out Space-to-Cellular ASIC

AST SpaceMobile, which is building the first and only space-based cellular broadband network accessible directly by everyday smartphones, has begun the tape-out phase for its Application-Specific Integrated Circuit (ASIC), in collaboration with TSMC. Part of AST SpaceMobile's BlueBird Block 2 program, the AST5000 ASIC is a novel, custom and low-power architecture developed to enable up to a tenfold improvement in processing bandwidth on each satellite, unlocking...


Arm readies next-gen, on-device AI silicon

Arm introduced its Compute Subsystems (CSS) for Client, a framework that brings together its Armv9 capabilities into production-ready implementations of new Arm CPUs and GPUs on 3nm process nodes.

CSS for Client provides the foundational computing elements for the company's flagship SoCs and features the latest Armv9.2 CPUs and Immortalis GPUs, as well as production-ready physical implementations for CPU and GPU on 3nm and the latest CoreLink System Interconnect and System Memory Management Units (SMMUs).

Silicon companies can use the Arm CSS framework to create the latest compute solutions for AI smartphones and PCs, delivering Android workloads with a greater than 30 percent increase in compute and graphics performance and 59 percent faster AI inference for broader AI/ML and computer vision (CV) workloads, according to the company.

The new Arm Cortex-X925 CPU cluster is Arm’s most powerful and efficient yet. Using advanced 3nm process nodes and running at 3.8GHz with maximum cache size, the Cortex-X925 achieves a 36% increase in single-thread performance over 2023 flagship 4nm SoCs. For AI applications, it provides a remarkable 41% performance boost, significantly enhancing the responsiveness of on-device generative AI like large language models.

Key Points for the new Arm Cortex-X925 CPU:

  • Performance: Highest year-on-year performance uplift in Cortex-X history.
  • Process Node: Advanced 3nm technology.
  • Clock Rate: 3.8GHz with maximum cache.
  • Single-Thread Performance: 36% increase compared to 2023 4nm SoCs.
  • AI Performance: 41% improvement for on-device generative AI tasks.

Arm's press release cites support from TSMC, Samsung Foundry, and Intel Foundry Services.

https://newsroom.arm.com/news/arm-css-for-client?utm_source=linkedin&utm_medium=social-organic&utm_content=newsroom&utm_campaign=mk04_client_css24

CoreSite expands AWS Direct Connect

CoreSite now offers native access to AWS Direct Connect in its Chicago data center campus, offering dedicated connections at 10 Gbps and 100 Gbps and hosted connections from 50 Mbps to 25 Gbps.

AWS Direct Connect is natively available in six CoreSite markets including Los Angeles, Silicon Valley, Denver, New York, Northern Virginia and now Chicago. This new AWS Direct Connect deployment enables CoreSite customers to more easily build secure, high-performing and resilient infrastructure aligned with the AWS Well-Architected Framework.

https://www.coresite.com

Tuesday, May 28, 2024

T-Mobile to acquire UScellular operations for $4.4 billion

T-Mobile agreed to acquire substantially all of UScellular’s wireless operations, including its wireless customers and stores, as well as approximately 30% of its spectrum across certain bands. The combination will add millions of UScellular customers, many of whom are in rural areas, to T-Mobile’s 5G network. Conversely, T-Mobile customers will also get access to UScellular’s network in areas that previously had limited coverage.

Under the deal, T-Mobile will pay approximately $4.4 billion, consisting of cash and up to $2.0 billion of assumed debt. UScellular will retain its other spectrum and towers, while T-Mobile will lease space on 2,100 towers. T-Mobile anticipates $1.0 billion in annual cost synergies and plans to reinvest these savings to enhance consumer options and competition.

  • T-Mobile’s acquisition cost: $4.4 billion (cash and up to $2.0 billion in assumed debt).
  • UScellular retains ownership of its remaining spectrum and towers.
  • T-Mobile to lease space on 2,100 towers.
  • No impact on T-Mobile’s 2024 guidance or shareholder return program.
  • Expected annual cost synergies: $1.0 billion.
  • Integration cost: estimated between $2.2 billion and $2.6 billion.
  • Reinvestment of synergies to improve consumer choice and competition.
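The figures above can be sanity-checked with simple arithmetic (all values in $ billions; the split assumes the full $2.0 billion of debt is assumed):

```python
# Back-of-the-envelope check of the reported deal figures ($ billions).
total_consideration = 4.4   # cash plus assumed debt
assumed_debt_max = 2.0      # "up to $2.0 billion of assumed debt"

# If the full $2.0B of debt is assumed, the cash portion is the remainder.
cash_component_min = round(total_consideration - assumed_debt_max, 1)

annual_synergies = 1.0           # expected annual cost synergies
integration_cost = (2.2, 2.6)    # estimated one-time integration cost

# Years of synergies needed to cover the one-time integration cost.
payback_years = tuple(c / annual_synergies for c in integration_cost)

print(cash_component_min)   # 2.4
print(payback_years)        # (2.2, 2.6)
```

In other words, the cash portion is at least $2.4 billion, and the projected synergies would cover the integration cost in roughly 2.2 to 2.6 years.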

“With this deal T-Mobile can extend the superior Un-carrier value and experiences that we’re famous for to millions of UScellular customers and deliver them lower-priced, value-packed plans and better connectivity on our best-in-class nationwide 5G network,” said Mike Sievert, CEO of T-Mobile. “As customers from both companies will get more coverage and more capacity from our combined footprint, our competitors will be forced to keep up – and even more consumers will benefit. The Un-carrier is all about shaking up wireless for the good of consumers and this deal is another way for us to continue doing even more of that.”

https://www.t-mobile.com/news/business/uscellular-acquisition-operations-assets

Cologix completes 50MW data center in Columbus, Ohio

Cologix completed construction of its fourth data center in Columbus, Ohio.

Spanning 256,000 square feet across a seven-acre campus, COL4 is equipped to handle the growing influx of AI applications and offers unparalleled connectivity options and fully available power for businesses seeking high-performance data center solutions.

Highlights of Cologix COL4:

  • Capacity: 50MW of power across three data halls 
  • Connectivity: COL4 provides access to over 50 unique network providers in Meet-Me-Rooms (MMRs), ensuring diverse connectivity options for all customers.
  • Scalability: The facility offers scalable power options to meet diverse colocation requirements, providing flexibility and efficiency for enterprises of all sizes.
  • AI Readiness: COL4 is designed to support the surge of AI applications, providing the infrastructure needed for seamless integration with cloud services.
  • Redundancy: COL4 features redundant power and cooling systems, as well as robust network infrastructure, to minimize downtime and ensure continuous operation.
  • Security: The facility is equipped with state-of-the-art security measures to safeguard data and infrastructure, ensuring compliance with industry standards and regulations. COL4 features a bullet-resistant security booth, a closed-circuit television (CCTV) system, biometric scanners and badge access, along with onsite 24/7/365 security personnel and a K-rated perimeter security fence. Customized security is available.
  • Green Data Center Initiatives: COL4 was built using Leadership in Energy and Environmental Design (LEED) principles. 

“Over 10 years ago, we recognized Columbus’ importance as a technology hub for growing companies and decided to enter this market,” said Dawn Smith, President of Cologix. “The completion of COL4 comes at a crucial time with hyperscalers intensifying their focus on cloud regions and laying the groundwork for seamless AI integration with cloud services. This is the first colocation AI-ready data center completed in Columbus and is poised to support this surge in demand. We look forward to continuing to deliver market-leading colocation and interconnection solutions for the Columbus market.”

Together, the four Cologix data centers in the Columbus region span a total of 500,000 square feet and 80MW of power. All four of Cologix’s data centers in Columbus are interconnected with a diverse fiber ring. Additionally, Cologix has Ohio’s most comprehensive carrier hotel in its Columbus data centers as well as an interconnection ecosystem of 50+ unique network and cloud service providers, two public cloud onramps with access to Amazon Web Services Direct Connect and Google Cloud Interconnect and the Ohio IX internet exchange.


xAI raises $6 billion to propel its AI ambitions

xAI has secured $6 billion in Series B funding with contributions from key investors such as Valor Equity Partners, Vy Capital, Andreessen Horowitz, Sequoia Capital, Fidelity Management & Research Company, and Prince Alwaleed Bin Talal’s Kingdom Holding. 

The company says its mission is "to develop advanced AI systems that are truthful, competent, and beneficial for humanity, aiming to understand the true nature of the universe."

  • xAI was first announced in July 2023.
  • Release of Grok-1 on X in November 2023.
  • Improved Grok-1.5 model with long context capability.
  • Grok-1.5V with image understanding.
  • Open-source release of Grok-1.

https://x.ai/blog/series-b


Astera Labs brings PCIe 6.x testing to Taiwan

Astera Labs announced expanded PCIe 6.x testing capabilities in its Cloud-Scale Interop Lab to enable seamless interoperability between Aries 6 PCIe/CXL Smart DSP Retimers and a broad range of PCIe 6.x hosts and endpoints. This paves the way for AI platform developers to design high-bandwidth, low-latency PCIe 6.x connectivity with confidence, reduce overall development time, and deploy at scale.

Astera Labs has chosen Taiwan to launch its first Cloud-Scale Interop Lab outside of Silicon Valley.

Thad Omura, Chief Business Officer, Astera Labs, said, “As AI systems continue to advance at a rapid pace, data center operators need to deploy increasingly complex systems on an accelerated timeline. Our intense focus on standards compliance and plug-and-play interoperability is foundational to why our widely deployed, field-tested Aries Retimer portfolio sets the gold standard for PCIe/CXL® connectivity. Expanding our Cloud-Scale Interop Lab test suite to support PCIe 6.x operation fast-tracks deployment for customers integrating Aries 6 – the industry’s lowest power PCIe 6.x/CXL 3.x Retimer – with solutions from our ecosystem partners.”

Key Features of PCIe 6.x:

  • Increased Bandwidth: PCIe 6.x offers a significant increase in data transfer rates, reaching up to 64 GT/s (gigatransfers per second) per lane, effectively doubling the bandwidth of PCIe 5.0.
  • Lower Latency: Enhanced encoding and decoding techniques help to reduce latency, improving overall system performance.
  • Improved Power Efficiency: The new standard introduces lower power consumption features, making it more efficient for high-performance computing applications.
  • Backward Compatibility: PCIe 6.x maintains backward compatibility with previous PCIe generations, allowing it to work with older devices and infrastructure.
  • Enhanced Signal Integrity: Uses PAM4 (Pulse Amplitude Modulation with 4 levels) signaling to achieve higher data rates while maintaining signal integrity.
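The doubling over PCIe 5.0 follows directly from the signaling rates above. This is a simplified calculation of raw per-direction throughput that ignores FLIT framing and FEC overhead:

```python
# Rough per-direction bandwidth math for PCIe 6.x vs. PCIe 5.0
# (raw signaling rate only; FLIT framing and FEC overhead are ignored).

def raw_gbytes_per_sec(gt_per_sec: float, lanes: int) -> float:
    """Raw one-direction throughput in GB/s: each transfer carries one bit."""
    return gt_per_sec * lanes / 8

pcie5_x16 = raw_gbytes_per_sec(32, 16)   # 32 GT/s per lane
pcie6_x16 = raw_gbytes_per_sec(64, 16)   # 64 GT/s per lane (PAM4 signaling)

print(pcie5_x16)  # 64.0 GB/s
print(pcie6_x16)  # 128.0 GB/s -- double the PCIe 5.0 figure
```

Note that PAM4 carries two bits per symbol, which is how 64 GT/s is reached without doubling the channel's symbol rate.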

Use Cases for PCIe 6.x:

  • Data Centers and Servers: Ideal for high-speed data processing, storage solutions, and network interface cards, enhancing the performance of cloud computing and big data analytics.
  • Artificial Intelligence and Machine Learning: Supports faster data transfer and processing speeds, essential for training complex AI models and real-time inference tasks.
  • High-Performance Computing (HPC): Suitable for applications requiring extensive computational power and high data throughput, such as scientific simulations, financial modeling, and weather forecasting.
  • Graphics and Gaming: Provides increased bandwidth for graphics cards and other peripherals, enabling higher frame rates and better performance in gaming and professional graphic applications.
  • Networking: Enhances the performance of high-speed networking equipment, including 5G infrastructure and advanced telecommunications systems.