Rambus announced that its HBM3 Memory Controller IP now delivers performance of up to 9.6 Gbps, a 50% increase over the HBM3 Gen1 data rate of 6.4 Gbps.
The Rambus HBM3 Memory Controller can enable a total memory throughput of over 1.2 terabytes per second (TB/s) for the training of recommender systems, generative AI, and other demanding data center workloads.
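For readers checking the math, the throughput figure follows directly from the interface width: the JEDEC HBM3 standard specifies a 1024-bit-wide interface per device stack, so a 9.6 Gbps per-pin data rate yields roughly 1.2 TB/s of aggregate bandwidth. The back-of-the-envelope calculation below assumes that standard 1024-bit width.

\[
9.6~\tfrac{\text{Gb}}{\text{s}\cdot\text{pin}} \times 1024~\text{pins} \times \tfrac{1~\text{byte}}{8~\text{bits}} = 1228.8~\tfrac{\text{GB}}{\text{s}} \approx 1.2~\tfrac{\text{TB}}{\text{s}}
\]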
“HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies,” said Neeraj Paliwal, general manager of Silicon IP at Rambus. “Thanks to Rambus innovation and engineering excellence, we’re delivering the industry’s leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP.”
“HBM is a crucial memory technology for faster, more efficient processing of large AI training and inferencing sets, such as those used for generative AI,” said Soo-Kyoum Kim, vice president, memory semiconductors at IDC. “It is critical that HBM IP providers like Rambus continually advance performance to enable leading-edge AI accelerators that meet the demanding requirements of the market.”