Sunday, November 16, 2014

Mellanox Delivers EDR 100Gb/s InfiniBand and Ethernet Adapter

Mellanox Technologies introduced its ConnectX-4 single/dual-port 100Gb/s Virtual Protocol Interconnect (VPI) adapter.

Mellanox’s ConnectX-4 VPI adapter delivers 10, 20, 25, 40, 50, 56, and 100Gb/s throughput and supports both the InfiniBand and Ethernet standard protocols. It can connect to any compute architecture – x86, GPU, POWER, ARM, FPGA, and others. Performance is listed at 150 million messages per second with a latency of 0.7µs. The new adapter supports the new RoCE v2 (RDMA over Converged Ethernet) specification; the full range of overlay network technologies, including NVGRE (Network Virtualization using GRE), VXLAN (Virtual Extensible LAN), GENEVE (Generic Network Virtualization Encapsulation), and MPLS (Multi-Protocol Label Switching); and storage offloads such as T10-DIF and RAID offload. Sampling is expected in Q1 2015.
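For readers who program against RDMA hardware directly, adapters like this are driven through the standard verbs API. The following is a minimal sketch (plain libibverbs, not specific to ConnectX-4) that enumerates the local RDMA devices and prints each one's port state and encoded link width/speed; the use of port number 1 is an assumption of the example.

/*
 * Illustrative sketch: list local RDMA devices with libibverbs and
 * print each device's port 1 state and encoded link width/speed.
 * Assumes the verbs library and an RDMA-capable NIC are installed;
 * querying port 1 is an assumption of this example.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* active_width and active_speed are encoded values defined
             * by the verbs API, not literal Gb/s figures. */
            printf("%s: state=%d width=%d speed=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state, port.active_width, port.active_speed);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}

Compile against libibverbs (for example, gcc -o list_ports list_ports.c -libverbs); on ConnectX-series hardware the device typically appears under the mlx driver family.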

“Large-scale clusters have incredibly high demands and require extremely low latency and high bandwidth,” said Jorge Vinals, director of the Minnesota Supercomputing Institute at the University of Minnesota. “Mellanox’s ConnectX-4 will provide us with the node-to-node communication and real-time data retrieval capabilities we needed to make our EDR InfiniBand cluster the first of its kind in the U.S. With 100Gb/s capabilities, the EDR InfiniBand large-scale cluster will become a critical contribution to research at the University of Minnesota.”

“Cloud infrastructures are becoming a more mainstream way of building compute and storage networks. More corporations and applications target the vast technological and financial improvements that utilization of the cloud offers,” said Eyal Waldman, president and CEO of Mellanox.

http://www.mellanox.com