Tuesday, August 11, 2015

HGST Announces Persistent Memory Fabric Technology

HGST has developed a persistent memory fabric technology that promises low-power, DRAM-like performance and requires neither BIOS modification nor application rewrites. Memory-mapping remote PCM with the Remote Direct Memory Access (RDMA) protocol over standard networking infrastructures, such as Ethernet or InfiniBand, enables seamless, wide-scale deployment of in-memory computing. This network-based approach lets applications harness non-volatile PCM across multiple computers and scale out as needed.
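
To make the "no application rewrite" claim concrete, here is a minimal sketch (not HGST's implementation) of how an application could consume such a fabric: a hypothetical kernel driver exposes the remote PCM as a device node, the application memory-maps it with standard mmap(), and ordinary loads and stores are carried over RDMA underneath. The device path /dev/rpmem0 and the region size are assumptions for illustration only.

```c
/*
 * Sketch only: accessing a remote persistent-memory window through a
 * hypothetical RDMA-backed character device, using plain POSIX mmap.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (4UL * 1024 * 1024)   /* 4 MiB window, arbitrary size */

int main(void)
{
    /* Hypothetical device node exported by an RDMA-backed PCM driver. */
    int fd = open("/dev/rpmem0", O_RDWR);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Map the remote PCM window into the process address space. */
    void *pcm = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (pcm == MAP_FAILED) { perror("mmap"); close(fd); return EXIT_FAILURE; }

    /* From here on the application uses ordinary memory semantics;
       the driver and NIC turn the accesses into RDMA transfers. */
    strcpy((char *)pcm, "hello, persistent memory fabric");
    printf("read back: %s\n", (char *)pcm);

    munmap(pcm, REGION_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}
```

Because the application sees only a mapped address range, existing in-memory code paths would not need to change; the placement of data on local DRAM versus remote PCM becomes a deployment decision rather than a programming one.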

At this week's Flash Memory Summit in Santa Clara, California, HGST, in collaboration with Mellanox Technologies, is showcasing the PCM-based, RDMA-enabled in-memory compute cluster architecture. The HGST/Mellanox demonstration achieves random access latency of less than two microseconds for 512-byte reads and throughput exceeding 3.5 GB/s for 2 KB block sizes using RDMA over InfiniBand.

"DRAM is expensive and consumes significant power, but today's alternatives lack sufficient density and are too slow to be a viable replacement," said Steve Campbell, HGST's chief technology officer. "Last year our Research arm demonstrated Phase Change Memory as a viable DRAM performance alternative at a new price and capacity tier bridging main memory and persistent storage.  To scale out this level of performance across the data center requires further innovation.  Our work with Mellanox proves that non-volatile main memory can be mapped across a network with latencies that fit inside the performance envelope of in-memory compute applications."

"Mellanox is excited to be working with HGST to drive persistent memory fabrics," said Kevin Deierling, vice president of marketing at Mellanox Technologies.  "To truly shake up the economics of the in-memory compute ecosystem will require a combination of networking and storage working together transparently to minimize latency and maximize scalability.  With this demonstration, we were able to leverage RDMA over InfiniBand to achieve record-breaking round-trip latencies under two microseconds.  In the future, our goal is to support PCM access using both InfiniBand and RDMA over Converged Ethernet (RoCE) to increase the scalability and lower the cost of in-memory applications."

http://www.hgst.com