Intel announced the commercial production of its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000).
The new devices are Intel’s first purpose-built ASICs for complex deep learning for cloud and data center customers. Intel said its Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.
Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Scheduled to be available in the first half of 2020, the new VPU incorporates highly efficient architectural advances that are expected to deliver leading performance — more than 10 times the inference performance of the previous generation — with up to six times the power efficiency of competitor processors.
“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” stated Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.
“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, director, AI System Co-Design at Facebook.