Wednesday, October 27, 2021

Google collaborates on Intel's ASIC-based infrastructure processor

Intel and Google Cloud announced a deep collaboration to develop an ASIC-based, P4-programmable infrastructure processing unit (IPU).

Code-named “Mount Evans,” this open solution supports open source standards, including an infrastructure programmer development kit (IPDK) to simplify developer access to the technology in Google Cloud data centers. 

Machine learning, large-scale data processing and analytics, media processing, and high-performance computing are all workloads that can benefit from the IPU. Another key design goal is to separate infrastructure management functions from client workloads, enabling true bare-metal performance for tenants. Google and Intel also agreed to collaborate on open standards for transport and on P4 for programmable packet processing.
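For context, P4 is a domain-specific language for describing how a switch or IPU pipeline parses packets and applies match-action tables. As a rough, purely illustrative sketch of that match-action idea (not Intel or Google code, with hypothetical names and an exact-match lookup for simplicity), the Python fragment below mimics what a P4 table does: the control plane installs entries, and the data plane applies the matching action to each packet.

from dataclasses import dataclass

@dataclass
class Packet:
    dst_ip: str
    vlan: int
    payload: bytes

def forward(pkt, port):
    # Action: send the packet out a given port (printed here for illustration).
    print(f"forward {pkt.dst_ip} -> port {port}")
    return pkt

def set_vlan(pkt, vlan):
    # Action: rewrite a header field before forwarding.
    pkt.vlan = vlan
    return pkt

def drop(pkt):
    # Default action for packets with no matching entry.
    print(f"drop {pkt.dst_ip}")
    return None

class MatchActionTable:
    def __init__(self):
        # Exact-match table keyed on destination IP; real P4 targets also
        # support longest-prefix and ternary matches.
        self.entries = {}

    def add_entry(self, dst_ip, action, **params):
        # Control-plane operation: bind a key to an action and its parameters.
        self.entries[dst_ip] = (action, params)

    def apply(self, pkt):
        # Data-plane operation: look up the packet and run the bound action.
        action, params = self.entries.get(pkt.dst_ip, (drop, {}))
        return action(pkt, **params)

table = MatchActionTable()
table.add_entry("10.0.0.1", forward, port=3)
table.add_entry("10.0.0.2", set_vlan, vlan=42)
table.apply(Packet(dst_ip="10.0.0.1", vlan=0, payload=b""))   # matched: forwarded to port 3
table.apply(Packet(dst_ip="192.0.2.9", vlan=0, payload=b""))  # no entry: dropped

In a real deployment the equivalent pipeline is compiled from P4 and executed in hardware on the IPU or Tofino switch rather than interpreted in software.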

Intel said its goal is to enable developers to create an end-to-end programmable network based on Intel Xeon Scalable processors and next-generation Intel Xeon D processors, IPUs, and its new, P4-programmable Intel Tofino intelligent fabric processor (IFP) switch.

AT&T, supported by an established ecosystem of solution providers, will use Intel as a silicon provider for the deployment of its forthcoming virtualized radio access network (vRAN). The move gives AT&T the flexibility to bring automation and cloud-like capabilities into its network, along with optimizations for performance, cost and operational efficiency.

An archived webcast of the Intel Innovation 2021 event is posted.

Discussion of the IPU between Intel CEO Pat Gelsinger and Google Fellow Amin Vahdat begins at around 1:37:00.

https://www.intel.com/content/www/us/en/newsroom/news/innovation-event-livestream-replay.html#gs.essztu

Intel rolls FPGA-based Infrastructure Processing Unit (IPU)

Intel outlined its vision for the infrastructure processing unit (IPU), a programmable network device that intelligently manages system-level infrastructure resources by securely accelerating those functions in a data center.

In a video, Guido Appenzeller, chief technology officer of Intel's Data Platforms Group, says the idea is to cleanly separate the processing of client workloads from the cloud service provider's own infrastructure workloads.

Intel cites several advantages of an IPU architecture. First, cloud service providers can process infrastructure workloads very efficiently on silicon designed for those tasks. Second, with those functions offloaded, the provider can rent out 100% of the host's CPU resources to clients, and clients gain full control of those CPUs, including the ability to run their own hypervisors. In addition, local disk storage can be augmented or replaced with virtual storage attached over the network.
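As a back-of-the-envelope illustration of the second point (with hypothetical core counts, not Intel figures), this Python sketch models how moving infrastructure functions onto an IPU frees every host core for tenants:

HOST_CORES = 64  # hypothetical server size

def rentable_cores(infra_cores_on_host: int, has_ipu: bool) -> int:
    # With an IPU, the provider's infrastructure stack (virtual switching,
    # storage virtualization, telemetry) runs off-host, so no host cores
    # are withheld and tenants can even load their own hypervisor.
    reserved = 0 if has_ipu else infra_cores_on_host
    return HOST_CORES - reserved

print(rentable_cores(infra_cores_on_host=8, has_ipu=False))  # 56 cores rentable
print(rentable_cores(infra_cores_on_host=8, has_ipu=True))   # 64 cores rentable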

The first instances of Intel's FPGA-based IPU platforms are already shipping. Intel will be rolling out additional FPGA-based IPUs and integrated ASICs as well. “As a result of Intel’s collaboration with a majority of hyperscalers, Intel is already the volume leader in the IPU market with our Xeon-D, FPGA and Ethernet components,” said Patty Kummrow, vice president in the Data Platforms Group and general manager of the Ethernet Products Group at Intel. “The first of Intel’s FPGA-based IPU platforms are deployed at multiple cloud service providers and our first ASIC IPU is under test.”

https://youtu.be/ahChxDyl8t4

https://www.intel.com/content/www/us/en/newsroom/news/infrastructure-processing-unit-data-center.html