Open Server Summit: Open Compute Hardware - Facebook Future Directions

Thursday, October 24, 2013

Facebook has quantified energy efficiency gains of 38% for new servers conforming to the Open Compute Project specs, said Matt Corddry, Director of Hardware Engineering at Facebook, speaking at the Open Server Summit in Santa Clara, California. Moreover, the new servers deliver a 24% cost savings compared to generic OEM servers.
The Open Compute Project, which Facebook launched in April 2011, has resulted in vastly simplified Compute Servers, Storage JBODs and an innovative Open Rack System.
But where do we go from here?
Corddry said the low-hanging fruit has now been harvested, and double-digit efficiency gains are unlikely from future design tweaks.
Instead, Corddry said Facebook is now undertaking a fundamental rethinking of its server types. Currently, the company deploys about six standardized server types, depending on the application each is destined to serve. Every service's needs evolve, and it is standard practice to re-use equipment as application performance requirements change.
Looking ahead, Facebook is investing heavily in flash, although traditional disk is still good for cold storage. Corddry said one of Facebook's design principles is to keep storage in close physical proximity to the compute resource because their experience has shown that latency variation becomes an issue when these resources are virtualized and moved across data centers.
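To see why proximity matters, consider propagation delay alone. Below is a rough back-of-envelope sketch in Python; the distances and the flash read time in the comments are illustrative assumptions, not figures from the talk:

# Back-of-envelope: round-trip propagation delay for rack-local vs.
# longer-haul storage access. All distances are illustrative assumptions.

C_SIGNAL = 2.0e8  # approx. signal speed in copper/fiber, m/s (~2/3 c)

def rtt_us(distance_m: float) -> float:
    """Round-trip propagation delay in microseconds."""
    return 2 * distance_m / C_SIGNAL * 1e6

for label, dist_m in [("rack-local (2 m)", 2.0),
                      ("cross-campus (1 km)", 1_000.0),
                      ("cross-region (1,000 km)", 1_000_000.0)]:
    print(f"{label:>22}: {rtt_us(dist_m):10.2f} us round trip")

# rack-local:    ~0.02 us -- negligible next to a ~100 us flash read
# cross-region: ~10,000 us -- propagation alone dwarfs the media latency

At rack scale the wire is effectively free; once storage moves across data centers, the network, not the media, sets the latency floor, and that floor varies with path and load.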
The Facebook design mantra is now to disaggregate resources but keep them in the same rack. Keep it rack-local and keep it simple.
To get there, the company is pursuing the concept of standardized "sleds" -- a modular shelf of high-density compute or storage that can be easily plugged into any available space in a rack.
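The sled idea can be pictured as a rack of uniform slots that any module type can occupy. Here is a minimal, purely illustrative sketch in Python; the class names, the 42U height, and the sled sizes are assumptions for the example, not Open Compute specifications:

# Hypothetical model of the "sled" concept: a rack is a pool of uniform
# rack units, and compute or storage sleds plug into any free space.

from dataclasses import dataclass, field

@dataclass
class Sled:
    kind: str      # e.g. "compute" or "storage"
    height_u: int  # rack units the sled occupies

@dataclass
class Rack:
    total_u: int = 42
    sleds: list = field(default_factory=list)

    def free_u(self) -> int:
        return self.total_u - sum(s.height_u for s in self.sleds)

    def plug_in(self, sled: Sled) -> bool:
        """Install a sled into any available space, if it fits."""
        if sled.height_u <= self.free_u():
            self.sleds.append(sled)
            return True
        return False

rack = Rack()
rack.plug_in(Sled("compute", 2))
rack.plug_in(Sled("storage", 2))
print(f"{rack.free_u()}U still free")  # -> 38U still free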
Notably, Facebook typically runs about 20,000 servers per technician, so it is essential that these sleds can be easily plugged in or slid out for replacement.
As far as the network is concerned, Corddry said 10 GigE server connections are adequate for now, but given the density of flash that can be packed into a 2U sled, or the performance of a 2U high-performance compute sled, it is clear that connections will need to get faster.
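The arithmetic behind that point is simple. In the sketch below, the drive count and per-drive throughput are hypothetical assumptions chosen only to show the order of magnitude, not numbers from the talk:

# Why a 10 GigE link looks thin against a 2U sled full of flash.
# Drive count and per-drive throughput are illustrative assumptions.

NIC_GBPS = 10            # 10 GigE server connection
DRIVES_PER_SLED = 20     # hypothetical flash drives in a 2U sled
MB_S_PER_DRIVE = 500     # hypothetical sequential throughput per drive

sled_gbps = DRIVES_PER_SLED * MB_S_PER_DRIVE * 8 / 1000  # MB/s -> Gb/s
print(f"sled can source ~{sled_gbps:.0f} Gb/s vs. a {NIC_GBPS} Gb/s link")
# -> ~80 Gb/s of flash behind a 10 Gb/s pipe: the link is the bottleneck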
Corddry said Facebook is quite willing to compromise distance for speed. Most of the connections will run across the rack, meaning that 1.5 meter lengths should be fine. The company has previously discussed a silicon photonics partnership with Intel. However, Corddry said Facebook is really agnostic about copper vs. fiber.
The overriding design criterion is that everything that goes into the rack must be cheap enough to deploy at Facebook's truly massive scale.
http://www.openserversummit.com
http://www.opencompute.org/