Tuesday, May 21, 2024

Ampere targets 256-core CPU

Ampere Computing has released its annual update, showcasing the company's commitment to sustainable, power-efficient computing for the Cloud and AI sectors. The announcement includes a collaboration with Qualcomm Technologies, Inc. on a joint AI inferencing solution that pairs Qualcomm's high-performance, low-power Cloud AI 100 Ultra accelerators with Ampere CPUs to deliver robust AI inferencing at low power.

Renee James, CEO of Ampere, emphasized the growing importance of Ampere's silicon design, particularly in addressing the increasing power demands of AI. She highlighted that Ampere's approach has successfully combined low power consumption with high performance, challenging the traditional notion that low power equates to low performance. James pointed out that as AI continues to advance rapidly, the energy consumption of data centers becomes a critical issue, and Ampere's solutions are designed to address this by enhancing efficiency and performance without compromising sustainability.

Jeff Wittich, Chief Product Officer at Ampere, outlined the company's vision for "AI Compute," which integrates traditional cloud-native capabilities with AI. He noted that Ampere CPUs are versatile enough to handle a range of workloads, from data processing and web serving to media delivery and AI applications. Both James and Wittich previewed the upcoming AmpereOne® platform, featuring a 256-core CPU with 12 channels of DDR5 memory built on TSMC's N3 process node, which promises a significant jump in performance.

Key Highlights from Ampere's Update:

  • Collaboration with Qualcomm Technologies to develop a joint AI inferencing solution featuring Ampere CPUs and Qualcomm Cloud AI 100 Ultra.
  • Expansion of Ampere's 12-channel platform with a 256-core AmpereOne® CPU, delivering over 40% more performance than any CPU currently on the market, without requiring exotic cooling.
  • Meta's Llama 3 now runs on Ampere CPUs at Oracle Cloud, delivering performance comparable to an Nvidia A10 GPU at roughly a third of the power consumption (see the CPU-inference sketch after this list).
  • Formation of a UCIe working group as part of the AI Platform Alliance to enhance CPU flexibility through open interface technology.
  • Detailed performance updates on AmpereOne®, which leads AMD Genoa by 50% and Bergamo by 15% in performance per watt.
  • AmpereOne® platform provides up to 34% more performance per rack, aiding data centers in refreshing and consolidating infrastructure to save space, budget, and power.
  • Continued emphasis on sustainable data center infrastructure with solutions for retrofitting existing air-cooled environments and building new, environmentally sustainable data centers.
  • Ampere CPUs capable of running a wide range of workloads, from cloud-native applications to AI, integrating seamlessly with traditional applications.
  • Ampere's ongoing commitment to pioneering the efficiency frontier in computing, delivering high performance within an efficient computing envelope.
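
For context on the Llama 3 result above, the sketch below shows one generic way to run a quantized Llama 3 model entirely on a many-core server CPU, using the open-source llama-cpp-python bindings. This is an illustration only: the announcement does not describe the software stack Ampere and Oracle used, and the model file name and thread count here are assumptions.

  # Minimal sketch: CPU-only Llama 3 inference via llama-cpp-python.
  # The GGUF file name and thread count are assumptions, not details
  # from Ampere's announcement.
  from llama_cpp import Llama

  llm = Llama(
      model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # assumed quantized model file
      n_ctx=4096,     # context window
      n_threads=96,   # roughly match the cores available on the instance
  )

  result = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Why run LLM inference on CPUs?"}],
      max_tokens=128,
  )
  print(result["choices"][0]["message"]["content"])

Keeping the whole inference path on the CPU, as in this kind of setup, is what makes a like-for-like performance-per-watt comparison against a discrete GPU such as the A10 meaningful.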


https://youtu.be/pmHnZy7AjSk

