Wednesday, June 10, 2015

Blueprint: Hyper-Converged is Leading the Data Center Revolution

by Sachin Chheda, Senior Director, Product and Technical Marketing, Nutanix

Historically, provisioning and managing infrastructure in a data center has never been straightforward. There is the purchase of the servers, the acquisition of the storage and the choice of network supplier - and all of that before the software is even considered.

All that is changing: the notion that you need to acquire an array of different products in order to manage a data center is fast disappearing. According to a 2014 Gartner survey, 29% of enterprises are now looking to move to a converged infrastructure.

It used to be easy to deal with additional demands: more compute power? Just stick an extra server in the rack. Additional storage? Time for another storage array. Network running slow? Looks like it's time for more bandwidth, and so on. In today's budget-conscious times, this is not a worthwhile approach. The modern business cannot simply throw money at the problem, which is why we've seen companies looking to virtualization and server consolidation in an attempt to cut down on their data center footprint.

In fact, the issue was not just about matching infrastructure to needs, but also about coping with additional demand. The infrastructure, especially storage, had to be over-provisioned to cope with any peaks, and modernizing the infrastructure by opting for more virtualization has not helped; in fact, it has introduced even more over-provisioning, because the existing storage components do not have the right levels of automation for optimal performance.

There's a disconnect between the speed with which virtual provisioning works and the time it takes to provision an organization's storage. Just because a virtual machine can be brought online in a matter of minutes does not mean the same follows for other parts of the infrastructure: setting up storage and networking still takes large amounts of time. And this is a process that has been made worse by the growing use of fast flash storage, which has effectively made networking much more difficult.

Trying to combine legacy networked storage infrastructure with virtualized servers guarantees bottlenecks somewhere. Essentially, you'll have a supercharged datacenter in the body of an old jalopy. In this scenario, the old approach of throwing more dollars at your infrastructure does not solve the problem; it actually makes things worse as the gap between virtualized servers and legacy hardware grows wider.

What does the CIO do? Welcome to the world of hyper-converged infrastructure. It's an overloaded phrase that sounds like it was cooked up by an imaginative publicity department, but this is no over-hyped buzzword. It's a technology that provides a leap forward for companies that want to cope with increasing demand on storage, handle today's modern application workloads and manage the whole operation seamlessly, without breaking the bank.

Most important of all, a hyper-converged system provides enterprises with the type of scalability and flexibility enjoyed by the software-based architectures employed by Amazon Web Services (AWS), Google and Facebook.

These companies all faced a common problem: the need to scale quickly and efficiently to meet growing demand. They all tried to construct an architecture using existing technologies, but gave up as the legacy, hardware-centric kit could not cope with the demands. It was obvious that a new type of datacenter infrastructure was needed - one that did away with centralized storage systems, as traditional SANs were one of the major inhibitors.

They did this in several ways: they made extensive use of software-based functionality running on a mix of state-of-the-art and run-of-the-mill hardware, and they exposed it through REST APIs, which were used to introduce a greater degree of automation into the process and help the infrastructure cope more effectively with increased demand.
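To make that idea concrete, here is a minimal sketch of what REST-driven automation can look like in practice. The endpoint, credentials and payload schema below are hypothetical placeholders rather than any specific vendor's API; the point is the pattern of scripting infrastructure through HTTP calls instead of clicking through management consoles.

```python
# Illustrative sketch only: the endpoint, credentials and payload below are
# hypothetical placeholders, not a real product's API.
import requests

API_BASE = "https://infrastructure.example.com/api/v1"   # hypothetical REST endpoint
AUTH = ("admin", "secret")                                # placeholder credentials


def provision_vm(name, vcpus, memory_mb, disk_gb):
    """Ask the infrastructure's REST API to create a VM with the given resources."""
    payload = {
        "name": name,
        "vcpus": vcpus,
        "memory_mb": memory_mb,
        "disks": [{"size_gb": disk_gb}],
    }
    resp = requests.post(f"{API_BASE}/vms", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()   # fail loudly if the request was rejected
    return resp.json()        # typically a task or VM identifier to poll


if __name__ == "__main__":
    task = provision_vm("web-01", vcpus=2, memory_mb=4096, disk_gb=50)
    print("Provisioning request accepted:", task)
```

Once provisioning is a script rather than a ticket, it can be wrapped in orchestration tools and repeated hundreds of times a day, which is exactly the kind of automation the web-scale operators rely on.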

The operational efficiency of AWS, Google and Facebook is the bar that IT is now measured against. Web-scale methodologies built on software-based distributed architectures are entering the enterprise market in the form of solutions that do not require a team of highly skilled experts to configure and maintain. Enterprises can stop being envious of hyperscale IT and start looking at ways to deploy the technology in their own datacenters.

Rather than construct an elaborate collection of server farms and storage arrays, the hyper-converged approach takes a state-of-the-art x86 server combined with SSD and HDD storage as its building block. It doesn't sound very complex, but the secret sauce is in the software layer of the storage infrastructure. A hyper-converged solution, for example, can provide an architecture that runs in conjunction with industry-standard hypervisors including ESXi, Hyper-V and KVM. This datacenter infrastructure can deliver modular and linear scalability, better performance and simplified manageability for any virtualized workload at any scale. Another proof point is that hyper-converged solutions can go even further and offer integrated enterprise-class availability features, including VM-centric data protection and disaster recovery, enabling virtualized tier 0 and tier 1 workloads to run on the same infrastructure supporting initiatives such as VDI and server virtualization.
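As a rough illustration of what "modular and linear scalability" means in practice, consider the back-of-the-envelope sketch below. The per-node figures are assumptions chosen for illustration, not any product's specification; the point is that every node contributes a fixed slice of compute and storage, so growing the cluster is simple multiplication rather than a re-architecture exercise.

```python
# Back-of-the-envelope sketch of linear scalability: each identical node adds a
# fixed slice of compute and storage. Node specs are illustrative assumptions.
NODE_CORES = 24        # physical cores per node (assumed)
NODE_RAM_GB = 256      # RAM per node (assumed)
NODE_USABLE_TB = 8     # usable storage per node after replication (assumed)


def cluster_capacity(nodes):
    """Return aggregate resources for a cluster of identical nodes."""
    return {
        "cores": nodes * NODE_CORES,
        "ram_gb": nodes * NODE_RAM_GB,
        "storage_tb": nodes * NODE_USABLE_TB,
    }


# Growing from 4 to 8 nodes doubles every resource in lockstep;
# there is no separate SAN to resize or re-architect.
print(cluster_capacity(4))   # {'cores': 96, 'ram_gb': 1024, 'storage_tb': 32}
print(cluster_capacity(8))   # {'cores': 192, 'ram_gb': 2048, 'storage_tb': 64}
```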

One of the major initial stumbling blocks for hyper-converged infrastructure is that the earlier, merely converged architectures failed to deliver on the promises they made. Consider the examples where different vendors combined their proprietary offerings to create a converged stack, such as Cisco, VMware and EMC with Vblock, or Cisco and NetApp with FlexPod: these products made promises about easier integration, but fell short at the hardware level, as each of the components was still too proprietary to deliver efficiency when paired with the others. True hyper-converged solutions use commodity hardware and, as discussed above, let the software do the hard work.

For example, the wrong sort of converged architecture does nothing to do away with the need for SANs - and as we saw earlier, the more virtualization that's introduced, the greater the requirement for additional storage, which often has to be bought upfront to minimize disruption and avoid re-architecting the datacenter infrastructure. This storage can lead to bottlenecks that reduce the efficiency of the infrastructure. It is therefore far better to opt for the hyper-converged approach, which negates the need for upfront investment in a traditional SAN.

Companies are thinking more creatively about how to handle the increased storage workloads brought about by virtualization. There's a growing need for more compute power and an increasing need to keep costs low and drive greater efficiency, but there's no need to follow the old methods, nor to opt for an implementation based around long-established SAN technology. The world of the data center is changing inexorably and CIOs need to seize the opportunity to bring their systems up to date. In many cases, hyper-converged infrastructure will provide the future-proofed solution these companies demand.

About the Author

Sachin Chheda is Senior Director of Product and Technical Marketing at Nutanix.

About Nutanix

Nutanix delivers invisible infrastructure for next-generation enterprise computing, elevating IT to focus on the applications and services that power their business. The company’s software-driven Xtreme Computing Platform natively converges compute, virtualization and storage into a single solution to drive simplicity in the datacenter. Using Nutanix, customers benefit from predictable performance, linear scalability and cloud-like infrastructure consumption.

