Sunday, April 23, 2023

CoreWeave raises $221 million for GPU cloud infrastructure

CoreWeave secured $221 million in Series B funding for its cloud infrastructure optimized for large-scale GPU-accelerated workloads. Target workloads include artificial intelligence and machine learning, visual effects and rendering, batch processing and pixel streaming.

CoreWeave said its goal is to offer purpose-built, customized solutions that can outperform larger, more generalized cloud providers. The new capital will also support U.S.-based data center expansion with the opening of two new centers this year, bringing CoreWeave’s total number of North American data centers to five.

“CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers,” said CoreWeave CEO and co-founder Michael Intrator. “Magnetar’s strong, continued partnership and financial support as lead investor in this Series B round ensures we can maintain that momentum without skipping a beat. Additionally, we’re thrilled to expand our collaboration with the team at NVIDIA. NVIDIA consistently pushes the boundaries of what’s possible in the field of technology, and their vision and guidance will be invaluable as we continue to scale our organization.”

The round was led by Magnetar Capital, with contributions from NVIDIA, and rounded out by Nat Friedman and Daniel Gross.

NVIDIA recently released its highest-performance data center GPU, the NVIDIA H100 Tensor Core GPU, along with the NVIDIA HGX H100 platform. CoreWeave announced at the NVIDIA GTC conference in March that its HGX H100 clusters are live and already serving clients such as Anlatan, the creators of NovelAI. In addition to the HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform and available to clients on demand and via reserved-instance contracts.

Linux Foundation's DentOS 3.0 targets distributed enterprise edge

The Linux Foundation announced the release of DentOS 3.0, code-named "Cynthia," an open source network operating system built on the Linux kernel, SwitchDev, and other Linux-based projects.

Specific feature updates in 3.0 include:

  • Traffic Control (TC) Persistence: retains traffic control commands and configuration data across boots, simplifying configuration and setup
  • New Kernel 5.15 (LTS): increased security and manageability with the new kernel
  • Rapid DevOps, providing early access to:
      • IEEE 802.1X: security patches in the kernel and SwitchDev
      • QoS (Mgmt): enables prioritization and optimization of bandwidth usage in remote locations
      • IPv6: for continued expansion and support of more IoT devices
      • IGMP Snooping: no router required at the enterprise location
      • Egress Policer
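To make the TC persistence feature concrete, here is a minimal sketch using standard Linux `tc` commands (the interface name `swp1` is hypothetical, and this is generic Linux syntax, not DentOS-specific tooling). State configured this way normally vanishes on reboot; TC persistence in 3.0 is meant to carry it across boots instead of relying on boot scripts to re-apply it.

```shell
# Illustrative only: standard Linux tc commands on a hypothetical switch port "swp1".
# Without persistence, this qdisc/class state is lost at reboot and must be re-applied.

# Attach a classful HTB qdisc to the port, defaulting traffic to class 1:10
tc qdisc add dev swp1 root handle 1: htb default 10

# Rate-limit the default class to 100 Mbit/s
tc class add dev swp1 parent 1: classid 1:10 htb rate 100mbit

# Inspect the configuration that persistence would retain across boots
tc qdisc show dev swp1
```

In practice, persisting this state means the switch comes back up with its traffic policies already in place, which matters for unattended remote edge locations.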

DentOS enables Amazon's Just Walk Out Technology to connect and manage thousands of devices like cameras, sensors, entry and exit gates, and access points on the network edge. 

"We are pleased to leverage the DENT open-source platforms to power networking infrastructure to enable customers to skip check-out lines with our Just Walk Out Technology," said Jason Long, head of Networking for Amazon Physical Retail Technology and chairman of the DENT Board. "DENT enabled us to reduce our networking costs by giving us access to open-source switches that allowed Amazon to efficiently deploy new hardware and software whenever we need instead of waiting for a bug fix from a third-party vendor."

"Adoption and deployment by the world's largest e-commerce leader with its Just Walk Out Technology is a shining example of the power of open source," said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. "In just three years, the DENT community created a working platform for disaggregated networks to power multiple device locations at the edge, now used by top retailers to streamline operations. This undertaking is only possible by the power of collaborative open source development."

New Middle Mile: How does Graphiant overcome network inflexibility?

Ali Shaikh, Chief Product Officer at Graphiant, explains how the company's network architecture delivers a simple network service. Key takeaways:

* Graphiant’s stateless core philosophy keeps the transit as pure transport, fast and flexible, in contrast to today’s rigid topologies

* Graphiant’s architecture puts metadata into the data plane itself, allowing customer traffic to go wherever it needs to without having to worry about how to manage a complicated customer configuration landscape

* Graphiant’s service is built in such a way that it can become an open standard and shared with the industry

Check out the rest of the New Middle Mile (#nmm2023) video showcase here:

Want to be in one of the videos? Contact us at