Amazon Web Services announced the availability of P2, a new GPU instance type for Amazon Elastic Compute Cloud (Amazon EC2) that provides up to 16 NVIDIA Tesla K80 GPUs -- the most powerful GPU instances available in the cloud.
The largest P2 instance offers 16 GPUs with a combined 192 Gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, over 23 teraflops of double precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
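For readers who want to try the new instance type, the following is a minimal sketch of launching the 16-GPU size described above using the AWS SDK for Python (boto3). The AMI ID, key pair name, and region shown here are placeholder assumptions, not values from the announcement.

```python
# Minimal sketch: launch a p2.16xlarge instance with boto3.
# ImageId, KeyName, and region_name below are placeholders -- substitute your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder: a GPU-ready AMI of your choice
    InstanceType="p2.16xlarge",   # the largest P2 size (16 NVIDIA Tesla K80 GPUs)
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",        # placeholder key pair name
)

# Print the ID of the newly launched instance
print(response["Instances"][0]["InstanceId"])
```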
The new P2 instances are designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering.
“Two years ago, we launched G2 instances to support customers running graphics and compute-intensive applications,” said Matt Garman, Vice President, Amazon EC2. “Today, as customers embrace heavier GPU compute workloads such as artificial intelligence, high-performance computing, and big data processing, they need even higher GPU performance than what was previously available. P2 instances offer seven times the computational capacity for single precision floating point calculations and 60 times more for double precision floating point calculations than the largest G2 instance, providing the best performance for compute-intensive workloads such as financial simulations, energy exploration and scientific computing.”
http://aws.amazon.com/ec2/instance-types/p2/