Wednesday, March 22, 2017

Facebook shows its progress with Open Compute Project

The latest instalment of the annual Open Compute Project (OCP) Summit, which was held March 8-9 in Silicon Valley, brought new open source designs for next-generation data centres. It is six years since Facebook launched OCP and it has grown into quite an institution. Membership in the group has doubled over the past year to 195 companies and it is clear that OCP is having an impact in adjacent sectors such as enterprise storage and telecom infrastructure gear.

The OCP was never intended to be a traditional standards organisation, serving more as a public forum in which Facebook, Microsoft and potentially other big buyers of data centre equipment can share their engineering designs with the industry. The hyper-scale cloud market, which also includes Amazon Web Services, Google, Alibaba and potentially others such as IBM and Tencent, is where the growth is. IDC, in its Worldwide Quarterly Cloud IT Infrastructure Tracker, estimates total spending on IT infrastructure products (servers, enterprise storage and Ethernet switches) for deployment in cloud environments will increase by 18% in 2017 to reach $44.2 billion. Of this, IDC estimates that 61% of spending will be by public cloud data centres, while off-premises private cloud environments constitute 15% of spending.
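The IDC figures above can be cross-checked with some back-of-the-envelope arithmetic. The implied 2016 baseline is not stated in the article; it is derived here simply by reversing the 18% growth rate, so treat it as an estimate rather than an IDC figure:

```python
# Sanity-check of the IDC cloud infrastructure spending figures.
forecast_2017 = 44.2          # $bn, total 2017 cloud IT infrastructure spend
growth = 0.18                 # forecast year-on-year growth
public_share = 0.61           # share spent by public cloud data centres
private_offprem_share = 0.15  # share spent by off-premises private cloud

implied_2016 = forecast_2017 / (1 + growth)          # ~37.5 $bn (derived, not quoted)
public_spend = forecast_2017 * public_share          # ~27.0 $bn
private_offprem_spend = forecast_2017 * private_offprem_share  # ~6.6 $bn

print(f"Implied 2016 total:      ${implied_2016:.1f}bn")
print(f"Public cloud, 2017:      ${public_spend:.1f}bn")
print(f"Off-prem private, 2017:  ${private_offprem_spend:.1f}bn")
```

The remaining ~24% of the forecast, roughly $10.6 billion, would correspond to on-premises private cloud deployments, a split the article does not break out explicitly.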

It is clear from previous disclosures that all Facebook data centres have adopted the OCP architecture, including its primary facilities in Prineville (Oregon), Forest City (North Carolina), Altoona (Iowa) and Luleå (Sweden). Meanwhile, the newest Facebook data centres, under construction in Fort Worth (Texas) and Clonee (Ireland), are pushing OCP boundaries even further in terms of energy efficiency.

Facebook's ambitions famously extend to connecting all people on the planet and it has already passed the billion-monthly-user milestone for both its mobile and web platforms. The latest metrics indicate that Facebook is delivering 100 million hours of video content to its users every day; 95+ million photos and videos are shared on Instagram daily; and 400 million people now routinely use Messenger for voice and video chat.

At this year's OCP Summit, Facebook rolled out refreshed designs for all of its 'vanity-free' servers. Each is optimised for a particular workload type, and Facebook engineers can choose to run their applications on any of the supported server types. Highlights of the new designs include:

·         Bryce Canyon, a very high-density storage server for photos and videos that features a 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.

·         Yosemite v2, a compute server that supports 'hot' servicing, meaning a sled can be pulled out of the chassis to service its components without powering down the servers.

·         Tioga Pass, a compute server with dual-socket motherboards and more IO bandwidth (i.e. more bandwidth to flash, network cards and GPUs) than its predecessor, Leopard, enabling larger memory configurations and faster compute time.

·         Big Basin, a server designed for artificial intelligence (AI) and machine learning, optimised for image processing and training neural networks. Compared to its predecessor, Big Basin can train machine learning models that are 30% larger, thanks to greater arithmetic throughput and increased memory (from 12 GB to 16 GB).
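The Big Basin memory claim is easy to sanity-check: the 30% figure for larger models is Facebook's own, while the calculation below only shows that the memory uplift is of the same order. Treating per-accelerator memory as a rough proxy for trainable model size is an assumption, since model capacity also depends on batch size, precision and software:

```python
# Rough sanity check on the Big Basin memory uplift quoted above.
old_mem_gb = 12   # accelerator memory on the predecessor design
new_mem_gb = 16   # accelerator memory on Big Basin

memory_increase = (new_mem_gb - old_mem_gb) / old_mem_gb  # 1/3, i.e. ~33%
print(f"Memory increase: {memory_increase:.0%}")
```

A ~33% memory increase is broadly consistent with the quoted ability to train models roughly 30% larger, before accounting for the throughput gains.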

Facebook currently has web server capacity to deliver 7.5 quadrillion instructions per second. Its 10-year roadmap for data centre infrastructure, also highlighted at the OCP Summit, predicts that AI and machine learning will be applied to a wide range of applications hosted on the Facebook platform. Photos and videos uploaded to any of the Facebook services will routinely pass through machine-based image recognition, and to handle this load Facebook is pursuing additional OCP designs that bring fast storage capabilities closer to its compute resources. It will leverage silicon photonics to provide fast connectivity between resources inside its hyper-scale data centres, along with new open source models designed to speed innovation in both hardware and software.