Intel and Microsoft have contributed a Scalable I/O Virtualization (SIOV) specification to the Open Compute Project (OCP), enabling virtualization of PCI Express and Compute Express Link devices in cloud servers.
SIOV is a hardware-assisted I/O virtualization approach with the potential to support thousands of virtualized workloads per server. SIOV moves the non-performance-critical virtualization and management logic off the PCIe device and into the virtualization software stack. It also uses a new scalable identifier on the device, the PCIe Process Address Space ID (PASID), to address each workload’s memory.
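To make the PASID mechanism concrete, here is a toy model of how an IOMMU might route device DMA to per-workload address spaces keyed by a PASID. This is purely illustrative: the class and method names are hypothetical and do not come from the SIOV specification or any real driver API.

```python
# Toy model of PASID-based address-space selection (illustrative only;
# names are hypothetical, not from the SIOV spec or a real kernel API).

class Iommu:
    """Routes device DMA to a per-workload address space keyed by PASID."""

    def __init__(self):
        # PASID -> {device I/O virtual address: host physical address}
        self.tables = {}

    def attach(self, pasid, page_table):
        """Associate a workload's address-translation table with a PASID."""
        self.tables[pasid] = page_table

    def translate(self, pasid, iova):
        """Translate a DMA address. Each transaction carries a PASID, so one
        physical device can safely serve many isolated workloads."""
        table = self.tables.get(pasid)
        if table is None or iova not in table:
            raise PermissionError(f"DMA fault: PASID {pasid:#x}, IOVA {iova:#x}")
        return table[iova]


iommu = Iommu()
iommu.attach(0x10, {0x1000: 0x8000})  # workload A
iommu.attach(0x11, {0x1000: 0x9000})  # workload B: same IOVA, isolated memory

print(hex(iommu.translate(0x10, 0x1000)))  # -> 0x8000
print(hex(iommu.translate(0x11, 0x1000)))  # -> 0x9000
```

The key point the sketch captures: two workloads can present the same device-visible address, and the PASID carried with each transaction selects which workload's memory is actually accessed, which is what lets a single device scale to many isolated tenants.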
SIOV technology is supported in the upcoming Intel® Xeon® Scalable processors, code-named Sapphire Rapids, as well as Intel® Ethernet 800 Series network controllers and future PCIe and Compute Express Link (CXL) devices and accelerators. Upstreaming into the Linux kernel is underway, with integration anticipated later in 2022.
“Microsoft has long collaborated with silicon partners on standards as system architecture and ecosystems evolve. The Scalable I/O Virtualization specification represents the latest of our hardware open standards contributions together with Intel, such as PCI Express, Compute Express Link and UEFI,” said Zaid Kahn, GM for Cloud and AI Advanced Architectures at Microsoft. “Through this collaboration with Intel and OCP, we hope to promote wide adoption of SIOV among silicon vendors, device vendors, and IP providers, and we welcome the opportunity to collaborate more broadly across the ecosystem to evolve this standard as cloud infrastructure requirements grow and change.”
The first I/O virtualization specification, Single-Root I/O Virtualization (SR-IOV), was released more than a decade ago and was conceived for the virtualized environments of that era, which generally ran fewer than 20 virtualized workloads per server.