Intel has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion.
Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers. Large-node training systems based on Gaudi are expected to deliver up to a 4x increase in throughput versus systems built with the equivalent number of GPUs. Gaudi is designed for efficient and flexible system scale-up and scale-out.
Additionally, Habana’s Goya AI Inference Processor, which is commercially available, has demonstrated excellent inference performance including throughput and real-time latency in a highly competitive power envelope. Gaudi for training and Goya for inference offer a rich, easy-to-program development environment to help customers deploy and differentiate their solutions as AI workloads continue to evolve with growing demands on compute, memory and connectivity.
Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center class AI technologies.
“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”
Habana Labs chairman Avigdor Willenz will serve as a senior adviser to the business unit as well as to Intel Corporation after Intel’s purchase of Habana.
“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”
Interview: Habana Labs targets AI processors
Habana Labs, a start-up based in Israel with offices in Silicon Valley, emerged from stealth to unveil its first AI processor. Habana's deep learning inference processor, named Goya, delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs, according to the company. The company will offer a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads, ...
Habana Labs, a start-up based in Tel Aviv, Israel, raised $75 million in an oversubscribed series B funding round for its development of AI processors.
Habana Labs is currently in production with its first product, a deep learning inference processor named Goya that, according to the company, delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads, such as image recognition, neural machine translation, sentiment analysis, and recommender systems. A PCIe card based on its Goya HL-1000 processor delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
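To put the quoted figures in perspective, a few lines of back-of-the-envelope arithmetic relate the throughput, latency, and power numbers to one another. This is illustrative only; the inputs are Habana's own claims, not independent measurements.

```python
# All three figures below are Habana's quoted ResNet-50 inference
# numbers for the Goya HL-1000 PCIe card (company claims, not
# independently verified here).
throughput_img_per_s = 15_000   # images/second
latency_ms = 1.3                # milliseconds
power_w = 100                   # watts

# Energy efficiency: images processed per joule (1 W = 1 J/s).
images_per_joule = throughput_img_per_s / power_w
print(f"{images_per_joule:.0f} images per joule")  # 150

# Concurrency implied by Little's law: throughput x latency gives the
# approximate number of images in flight at any moment.
images_in_flight = throughput_img_per_s * (latency_ms / 1000)
print(f"~{images_in_flight:.1f} images in flight")  # ~19.5
```

At roughly 150 images per joule, the ">2 orders of magnitude" claim would imply typical CPU deployments at the time landed somewhere near 1 image per joule or below on the same benchmark.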
Habana Labs expects to launch a training processor, codenamed Gaudi, in the second quarter of 2019.
The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.
“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”
“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”
https://habana.ai/
Intel ships its Nervana Neural Network Processors
Intel announced the commercial production of its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000).
The new devices are Intel’s first purpose-built ASICs for complex deep learning for cloud and data center customers. Intel said its Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.
Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Scheduled to be available in the first half of 2020, the new VPU incorporates unique, highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competing processors.
“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” stated Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.
“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, director, AI System Co-Design at Facebook.