Intel outlined its vision for the infrastructure processing unit (IPU), a programmable network device that intelligently manages system-level infrastructure resources by securely accelerating those functions in a data center.
In a video, Guido Appenzeller, chief technology officer of Intel's Data Platforms Group, says the idea is to cleanly separate the processing of client workloads from the cloud service provider's infrastructure workloads.
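To make that split concrete, the placement logic can be pictured with a brief Python sketch. It is purely illustrative: the class, function, and task names are hypothetical stand-ins rather than any Intel API, and the only point is that infrastructure functions land on the IPU while tenant workloads keep the host CPU to themselves.

    # Illustrative model of the workload split Appenzeller describes:
    # tenant workloads stay on host CPU cores, while infrastructure
    # functions run on the IPU. All names here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Processor:
        name: str
        tasks: list = field(default_factory=list)

        def run(self, task: str) -> None:
            self.tasks.append(task)

    INFRA_TASKS = {"virtual_switch", "storage_virtualization", "crypto_offload"}

    def place(task: str, host: Processor, ipu: Processor) -> None:
        # Infrastructure functions go to the IPU; everything else is a
        # tenant workload and stays on cores the provider can rent out.
        (ipu if task in INFRA_TASKS else host).run(task)

    host = Processor("host_cpu")
    ipu = Processor("ipu")
    for t in ("tenant_vm_1", "virtual_switch", "tenant_vm_2", "storage_virtualization"):
        place(t, host, ipu)

    print(host.tasks)  # ['tenant_vm_1', 'tenant_vm_2']
    print(ipu.tasks)   # ['virtual_switch', 'storage_virtualization']

The payoff is the accounting at the end: once the infrastructure tasks are off the host, every host core is left for client workloads.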
Intel cites several advantages for an IPU architecture. First, cloud service providers can run their infrastructure workloads efficiently on silicon designed for those tasks. Second, with those functions moved off the host, the provider can rent out 100% of available CPU resources to clients. Third, clients gain full control of the CPUs, including the ability to run their own hypervisors. Finally, local disk storage can be enhanced or replaced with virtual storage connected over the network (a sketch of that idea follows at the end of this section).

The first instances of Intel's FPGA-based IPU platforms are already shipping, and the company will roll out additional FPGA-based IPUs as well as integrated ASICs.

"As a result of Intel's collaboration with a majority of hyperscalers, Intel is already the volume leader in the IPU market with our Xeon-D, FPGA and Ethernet components," said Patty Kummrow, vice president in the Data Platforms Group and general manager of the Ethernet Products Group at Intel. "The first of Intel's FPGA-based IPU platforms are deployed at multiple cloud service providers and our first ASIC IPU is under test."
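The network-attached storage point can be pictured the same way. In the hypothetical sketch below, the client writes to what looks like a local block device while the data is actually held by a remote target reached over the network, which is the role an IPU would play when it virtualizes storage; none of the names correspond to a real Intel interface.

    class RemoteTarget:
        """Stand-in for storage reached over the data-center network."""
        def __init__(self) -> None:
            self._blocks: dict[int, bytes] = {}

        def write(self, lba: int, data: bytes) -> None:
            self._blocks[lba] = data

        def read(self, lba: int) -> bytes:
            return self._blocks.get(lba, b"\x00" * 512)

    class VirtualDisk:
        """What the tenant sees: an ordinary, local-looking block device."""
        def __init__(self, target: RemoteTarget) -> None:
            self._target = target  # in practice, I/O the IPU carries over the fabric

        def write_block(self, lba: int, data: bytes) -> None:
            self._target.write(lba, data)

        def read_block(self, lba: int) -> bytes:
            return self._target.read(lba)

    disk = VirtualDisk(RemoteTarget())
    disk.write_block(0, b"boot sector")
    print(disk.read_block(0))  # served over the network, not from a local drive

The client-side interface stays the same; where the bytes actually live becomes the provider's choice.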