Cerebras, a start-up based in Los Altos, California, has unveiled its Wafer Scale Engine (WSE), a record-setting AI processor with a die size of 46,225 square millimeters and more than 1.2 trillion transistors. The chip is 56X larger than the largest graphics processing unit and contains 3,000X more on-chip memory.
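The "56X" figure checks out arithmetically if the reference point is NVIDIA's V100 (roughly an 815 mm² die, the largest GPU at the time); which GPU Cerebras had in mind is our assumption:

```python
# Back-of-the-envelope check of the "56X larger" claim. The comparison
# GPU is assumed to be NVIDIA's V100 (~815 mm^2 die), not stated by Cerebras.
wse_area_mm2 = 46_225
gpu_area_mm2 = 815  # assumed: NVIDIA V100

print(f"WSE / GPU die area: {wse_area_mm2 / gpu_area_mm2:.1f}x")
# -> roughly 56.7x, consistent with the stated figure
```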
Key specs
- 400,000 Sparse Linear Algebra Compute (SLAC) cores
- 18 GB of on-chip SRAM, all accessible within a single clock cycle, delivering 9 PB/s of memory bandwidth (see the quick per-core arithmetic after this list)
- 100 Pb/s interconnect bandwidth in a 2D mesh
- Manufactured by TSMC on its 16nm process technology
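Dividing the headline figures evenly across the cores (an assumption; Cerebras does not state the exact per-core allocation here) gives a sense of how local the memory model is:

```python
# Per-core figures implied by the headline specs, assuming an even
# split across all cores (an assumption, not a published allocation).
cores = 400_000
sram_bytes = 18 * 1024**3    # 18 GB on-chip SRAM
mem_bw_bytes_s = 9e15        # 9 PB/s memory bandwidth

print(f"SRAM per core:   {sram_bytes / cores / 1024:.0f} KB")
print(f"Mem BW per core: {mem_bw_bytes_s / cores / 1e9:.1f} GB/s")
# -> roughly 47 KB of SRAM and ~22 GB/s of bandwidth per core
```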
The SLAC cores are flexible, programmable, and optimized for the sparse linear algebra that underpins neural network computation. The cores are linked by Swarm, a fine-grained, all-hardware, on-chip mesh communication network.
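As a rough illustration of why sparsity matters for this workload: ReLU-style activations leave many zeros, and multiply-accumulates against zeros are wasted work, so hardware that skips them saves proportionally. A minimal NumPy sketch of the idea follows; it is an illustration, not Cerebras's actual scheme:

```python
import numpy as np

# Matrix-vector product with sparse activations: MACs on zero inputs
# contribute nothing, so skipping them leaves the result unchanged.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))       # dense weight matrix
x = rng.standard_normal(512)
x[rng.random(512) < 0.7] = 0.0            # ~70% zeros (e.g., post-ReLU)

dense_macs = W.size                            # work if every element is used
sparse_macs = W.shape[0] * np.count_nonzero(x) # work if zeros are skipped

y = W @ x  # identical result either way
print(f"MACs avoided by exploiting sparsity: {1 - sparse_macs / dense_macs:.0%}")
```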
“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size—such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems. “Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”
Cerebras unveiled the product at this week's Hot Chips conference at Stanford University.
http://www.cerebras.net