Habana Labs, a start-up based in Tel-Aviv, Israel, raised $75 million in an oversubscribed Series B funding round for its development of AI processors.
Habana Labs is currently in production with its first product, a deep learning inference processor named Goya, which the company says delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate AI inference workloads such as image recognition, neural machine translation, sentiment analysis, and recommender systems. The card delivers 15,000 images/second of throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds of latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
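For context on what those benchmark figures measure, the sketch below shows how ResNet-50 inference throughput (images/second) and per-batch latency are typically computed. It uses stock PyTorch and torchvision rather than Habana's Goya software stack, and the batch size, device, and iteration counts are illustrative assumptions, not vendor settings.

```python
# Minimal sketch (assumed setup, not Habana's SDK): measuring ResNet-50
# inference throughput and per-batch latency with stock PyTorch/torchvision.
import time
import torch
import torchvision

# Illustrative model and device; Goya benchmarks run on Habana's own stack.
model = torchvision.models.resnet50(weights=None).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch_size = 32  # assumed batch size for illustration
images = torch.randn(batch_size, 3, 224, 224, device=device)

# Warm-up iterations so one-time setup costs do not skew the timing.
with torch.no_grad():
    for _ in range(10):
        model(images)
if device == "cuda":
    torch.cuda.synchronize()

iterations = 100
start = time.perf_counter()
with torch.no_grad():
    for _ in range(iterations):
        model(images)
        if device == "cuda":
            torch.cuda.synchronize()  # wait for the GPU before timing the next batch
elapsed = time.perf_counter() - start

throughput = batch_size * iterations / elapsed   # images per second
latency_ms = elapsed / iterations * 1000         # milliseconds per batch
print(f"throughput: {throughput:.0f} img/s, latency: {latency_ms:.2f} ms/batch")
```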
Habana Labs expects to launch a training processor, codenamed Gaudi, in the second quarter of 2019.
The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.
“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”
“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”
https://habana.ai/
Tuesday, November 12, 2019
Intel announced the commercial production of its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000).
The new devices are Intel’s first purpose-built ASICs for complex deep learning for cloud and data center customers. Intel said its Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.
Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Scheduled to be available in the first half of 2020, the new VPU incorporates highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competitor processors.
“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” stated Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.
“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, director, AI System Co-Design at Facebook.