Aviz Networks is working with TensorWave, a start-up building a GPU-as-a-Service offering. The collaboration focuses on intelligent networks for AI, implementing RoCE (RDMA over Converged Ethernet)-based AI fabrics to optimize GPU-as-a-Service offerings. The implementation features Aviz's Open Network Enterprise Suite (ONES), a multi-vendor SONiC solution.
By deploying Aviz Networks' technology, TensorWave will enhance its GPU-as-a-Service offering, using advanced AMD MI300 accelerators to efficiently meet growing market demand as companies seek to leverage GenAI, LLMs, and machine learning for their businesses.
TensorWave has quickly established itself as a leader in GPU-as-a-Service. The integration of Aviz technology will enable the management and operation of multi-vendor RoCE-based AI fabrics, crucial for handling diverse GPUs, DPUs, and high-radix switches. ONES' capabilities include RoCE orchestration, real-time visibility, and threshold-based anomaly detection, making it the only vendor-agnostic AI fabric controller on the market.
The existing ONES feature set, which includes support for high-density platform configurations, is particularly well suited to the environments TensorWave manages. Integrating Network Copilot with ONES extends these capabilities further, providing intelligent guidance and automated management functions that simplify the complex tasks of network configuration and maintenance.
"Our technology is crucial in enabling TensorWave to provide state-of-the-art GPU services, representing a major advancement in our mission to simplify and elevate Networks for AI, and AI for Networks," Vishal Shukla, Founder and CEO of Aviz Networks