Samsung Electronics Co. has introduced a new High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power.
The new processing-in-memory (PIM) architecture is designed to accelerate large-scale processing in data centers, high-performance computing (HPC) systems, and AI-enabled mobile applications.
Kwangil Park, senior vice president of Memory Product Planning at Samsung Electronics, stated, "Our groundbreaking HBM-PIM is the industry's first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference. We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications."
Samsung said most of today's computing systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data processing tasks. This sequential processing approach requires data to move back and forth constantly, resulting in a system-slowing bottleneck, especially when handling ever-increasing volumes of data.
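To make the bottleneck Samsung describes concrete, the toy sketch below models a von Neumann-style loop in which every operand must cross the memory bus to a central processor and every result must cross back; the bus-transfer count, not the arithmetic, scales with the data volume. The function names and the multiply-add workload are illustrative assumptions, not details from Samsung's announcement.

```python
# Toy model of the von Neumann bottleneck: a central processor must pull
# every operand out of memory and push every result back, so bus traffic
# grows with the data volume even for trivial arithmetic.
# (Illustrative sketch only; not a simulation of any real memory system.)

def von_neumann_multiply_add(memory, scale, offset):
    bus_transfers = 0
    for address in range(len(memory)):
        operand = memory[address]          # read: memory -> processor
        bus_transfers += 1
        result = operand * scale + offset  # compute inside the processor
        memory[address] = result           # write: processor -> memory
        bus_transfers += 1
    return bus_transfers

data = list(range(1_000_000))
transfers = von_neumann_multiply_add(data, scale=2.0, offset=1.0)
print(f"{transfers:,} bus transfers for {len(data):,} elements")  # 2,000,000
```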
Instead, the HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimized AI engine inside each memory bank (a storage sub-unit), enabling parallel processing and minimizing data movement. When applied to Samsung's existing HBM2 Aquabolt solution, the new architecture delivers more than twice the system performance while reducing energy consumption by more than 70%. The HBM-PIM also does not require any hardware or software changes, allowing faster integration into existing systems.
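The contrast with processing-in-memory can be sketched in the same toy terms: if each memory bank applies the operation to the slice of data it already holds, per-element traffic to a central processor disappears. The 16-bank split and the multiply-add workload below are hypothetical choices for illustration; Samsung's announcement does not describe the HBM-PIM engine or its programming model at this level.

```python
# Toy model of processing-in-memory: each bank transforms the data it
# already holds, so no per-element transfers reach a central processor.
# The bank count and workload are assumptions made for illustration only.

NUM_BANKS = 16

def pim_multiply_add(memory, scale, offset):
    bank_size = len(memory) // NUM_BANKS
    for bank in range(NUM_BANKS):
        start, end = bank * bank_size, (bank + 1) * bank_size
        # Compute happens "inside" the bank; nothing crosses the bus here.
        for address in range(start, end):
            memory[address] = memory[address] * scale + offset
    return 0  # no per-element bus transfers in this toy model

data = list(range(1_000_000))
transfers = pim_multiply_add(data, scale=2.0, offset=1.0)
print(f"{transfers} per-element bus transfers")  # 0
```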