by Stefan Bernbo, founder and CEO of Compuverde
Cisco’s latest Visual Networking Index: Global Mobile Data Traffic Forecast offers just one example of what enterprises are facing on the storage front. The report predicts that global mobile data traffic will grow at a compound annual growth rate of 57 percent from 2014 to 2019. That’s a ten-fold increase in just five years.
How will organizations scale to meet these massive new storage demands? Hardware costs make rapid scaling prohibitive for most businesses, yet a solution is needed quickly. Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands.
Such flexibility can be found in software-defined storage (SDS). Because the storage and compute needs of organizations vary, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its own features and benefits, which are discussed below and in which resellers should be well versed.
Storage Then and Now
Before next-gen storage was “hyper,” it was merely “converged.” Converged storage combines storage and computing hardware to speed application delivery and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional approach, in which storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.
Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers. Instead, it uses a hardware-based approach composed of discrete components, each of which can be used on its own for its original purpose in a “building block” model.
In contrast, hyperconverged storage infrastructure is software-defined. All components are converged at the software level and cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.
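The local-versus-remote placement decision described above can be sketched as a simple policy. The threshold, node names, and spreading rule below are illustrative assumptions for the sake of the sketch, not the behavior of any particular SDS product:

```python
# Illustrative sketch of a hyperconverged node's data-placement policy:
# frequently accessed ("hot") objects stay on the local node's disks,
# close to the compute that uses them, while cold objects are placed on
# a peer node. The access-count threshold is an arbitrary assumption;
# a real system would tune this per deployment.

HOT_THRESHOLD = 100  # accesses per day (illustrative value)

def choose_placement(access_count: int, local_node: str, peer_nodes: list) -> str:
    """Return the node that should hold an object, given its access rate."""
    if access_count >= HOT_THRESHOLD:
        return local_node  # hot data: keep it local, next to compute
    # cold data: spread deterministically across peer nodes
    return peer_nodes[access_count % len(peer_nodes)]

print(choose_placement(250, "node-a", ["node-b", "node-c"]))  # hot data stays local
print(choose_placement(3, "node-a", ["node-b", "node-c"]))    # cold data goes to a peer
```

The point of the sketch is only that, in a hyperconverged cluster, placement is a software decision made per object, rather than a property of which array a LUN happens to live on.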
Flexibility and agility are increased, which is exactly what enterprise IT admins need to manage today’s data demands effectively and efficiently. Hyperconverged storage also promotes cost savings: organizations can use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need “1:1” scaling of compute and storage would use the hyperconverged approach, as would those deploying VDI environments. The hyperconverged model is storage’s version of a Swiss Army knife, useful in many business scenarios. The end result is one building block that works exactly the same everywhere; it’s just a question of how many building blocks a data center needs.
Start Small, Scale as Needed
The hyperconverged approach seems like just what the storage doctor ordered, but hyperscale is also worth exploring. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required to build big data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. Software-defined storage now enables many enterprises to enjoy the benefits of hyperscale as well.
Lower total cost of ownership is a major benefit. Commodity off-the-shelf (COTS) servers are typically used in the hyperscale approach, and a data center can have millions of virtual servers without the added expense that this many physical servers would require. Data center managers want to get rid of refrigerator-sized NAS and SAN disk shelves, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture: less hardware needs to be purchased, and the hardware itself costs less. Hyperscale enables organizations to buy commodity hardware; hyperconverged goes one step further by running both elements, compute and storage, on the same commodity hardware. It becomes a question of how many servers are necessary.
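To make the “how many servers” question concrete, here is a back-of-the-envelope sizing sketch: a hyperconverged cluster scales compute and storage together in identical nodes, while a hyperscale design sizes the two pools independently. The per-node capacities are invented for illustration only, not vendor specifications:

```python
import math

# Hypothetical per-node capacities (illustrative numbers, not real specs).
HC_NODE = {"cores": 32, "tb": 48}   # hyperconverged: compute + storage per node
COMPUTE_NODE_CORES = 64             # hyperscale: dedicated compute box
STORAGE_NODE_TB = 120               # hyperscale: dedicated storage box

def hyperconverged_nodes(need_cores: int, need_tb: int) -> int:
    """Identical nodes, so the scarcer resource dictates the node count."""
    return max(math.ceil(need_cores / HC_NODE["cores"]),
               math.ceil(need_tb / HC_NODE["tb"]))

def hyperscale_nodes(need_cores: int, need_tb: int) -> tuple:
    """Two independent pools, each sized only for its own demand."""
    return (math.ceil(need_cores / COMPUTE_NODE_CORES),
            math.ceil(need_tb / STORAGE_NODE_TB))

# A storage-heavy workload: modest compute demand, lots of data.
print(hyperconverged_nodes(128, 2400))  # storage demand forces 50 combined nodes
print(hyperscale_nodes(128, 2400))      # (2 compute, 20 storage) = 22 boxes
```

The sketch shows why workloads with lopsided compute-to-storage ratios often favor hyperscale, while balanced “1:1” workloads suit hyperconverged building blocks.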
The Best of Both Worlds
Here’s an easy way to look at the two approaches. Hyperconverged storage is like having one box with everything in it; hyperscale has two sets of boxes, one set of storage boxes and one set of compute boxes. Which to choose depends on what the architect wants to do, according to the needs of the business. A software-defined storage solution can take over all the hardware and turn it into a type of appliance, or it can run as a VM, which makes it a hyperconverged configuration.
Perhaps the best news of all, as enterprises scramble to reconfigure current storage architectures, is that data center architects can employ a combination of hyperconverged and hyperscale infrastructures to meet their needs. Enterprises will appreciate the flexibility of these software-defined solutions, as storage needs are sure to change. Savvy resellers will be ready to explain how having this kind of agile infrastructure will help enterprises to future-proof their storage and save money at the same time.
About the Author
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions engineered to store huge data sets cost-effectively. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.