by Scott Geng, CTO of Egenera
The past decade has been a time of serious change in the
technology industry with the advent of virtualization and cloud computing.
Virtualization, by itself, has been a significant agent for change in every IT
data center around the world. It’s driven a massive push for server
consolidation that has made IT more efficient with its compute resources by
dramatically driving up utilization rates and fundamentally changing the
processes used for application deployment. It has also driven the development
of many new tools to help administrators get their job done. As an example, tools for live migration provide
more sophisticated ways to deal with hardware changes and load distribution
than was previously possible.
While virtualization was aggressively being adopted in data
centers across the globe, another major agent of change came along on its heels
– cloud computing. Cloud computing, which was really built on top of
virtualization, gives IT even more flexibility in how to deploy applications. It
enables user self-service capabilities that allow traditional IT workflow
processes to be managed by end users or other third parties, and allows those
processes to be fully automated to ease the delivery of services. Cloud technology
has also driven the availability of public services like IaaS, PaaS and SaaS.
These concepts have introduced a new paradigm of service delivery that is
causing many organizations to redesign how they deliver their business value.
Cloud Has Major Impact on Reseller Channels
Virtualization and cloud computing have also resulted in a
massive amount of change across the industry - including the big server vendors,
indirect channels like resellers and VARs, service providers, colos and enterprises.
Let’s start with the big server vendors. Despite experiencing modest growth in the face of virtualization and cloud trends, their slice of the pie is getting substantially smaller. Baird Equity Research Technology estimates that for every dollar spent on Amazon Public Cloud resources, at least $3 to $4 is not spent on traditional IT. The reason for this is twofold. First, the massive consolidation effort spawned by virtualization has reduced the number of servers that IT needs to run the business. Second, huge companies like Amazon and Google are not buying their servers from the big hardware vendors, but are building the hardware themselves to control the cost of their infrastructure. It’s hard to get a definitive count of how many servers that represents, but gross estimates put Amazon and Google at close to 3 million servers combined – roughly 10% of the big server vendors’ market share.
VARs and resellers are feeling the pinch even more. If you
are an IBM reseller, for example, you are feeling the impact of the smaller
number of opportunities to sell hardware and add value on top of it. It will
continue to be an uphill battle for these vendors to remain relevant in this
new cloud economy.
The Public Cloud model is also impacting Service Providers.
As businesses move a larger percentage of their services into the cloud, the
traditional service providers are finding it harder to prevent customers from
abandoning ship to the big public cloud vendors – mostly because the price
points from these vendors are so attractive. Take Amazon pricing for example.
They have lowered prices over 40 times since 2008. That constant price pressure
makes it extremely difficult for these businesses to compete. The best path to success for these vendors is to provide specialized, differentiated services to avoid the infrastructure price wars that will otherwise crush them.
Enterprises are faced with a different set of challenges as they struggle with the fundamental questions of what to move to the cloud and how best to get there. The easy access to public cloud resources leaves many IT organizations hard-pressed to get their arms around which business
departments are already using these resources. It’s a real security concern
because of the ubiquitous access the cloud enables, as well as the problem of
identifying and controlling costs.
Now, let’s look at the impact of these trends on the data center and IT organizations. Most IT organizations see the power of these new technologies and are working hard to take advantage of the capabilities that they provide. However, these new capabilities and processes come at a price in the form of management complexity. From a process perspective, the management of virtualized solutions is an added burden for IT. Virtualization is not completely replacing all the pre-existing processes used to manage physical servers. Those processes still have to be followed, and it’s not just a matter of hardware deployment: hardware has an operational life cycle that IT has to manage too, including provisioning, firmware management, break-fix, next-generation upgrades and more. So from a practical point of view, virtualization has added another layer of management complexity to IT’s day-to-day operations.
The same can be said about cloud computing. While cloud
computing has expanded IT’s toolbox by enabling user self-service and access to
multiple service deployment models, it has added another layer of choice and
management complexity. This is
especially true for organizations starting to adopt hybrid cloud environments
where IT has the challenge of managing multiple disparate
environments that include a mix of hardware vendors, hypervisors and/or
multiple public cloud solutions.
To reduce complexity, IT is sometimes forced to limit those
choices, which is problematic because it locks organizations into solutions
that ultimately limit their ability to adapt to future changes. It is also fair
to say that most cloud solutions today are still relatively immature,
especially with respect to integration, or the lack thereof, into the business
processes of the company. Organizations are left trying to piece together the
integration into their existing processes, which is typically hard to do.
Another important point is that most cloud management
solutions today assume the infrastructure (the hardware platforms, the
hypervisors and the management software) is already deployed and set up. The
services on today’s market don’t help IT deal with moving to a new generation
of hardware or changing hardware vendors entirely. The reality is that there
are very few examples of solutions that integrate the concept of
self-service for the actual physical infrastructure itself or that make it easy
to react to infrastructure changes that happen naturally over time. This leaves
IT with no choice but to support separate processes for setting up and managing
their infrastructure. And that spells complexity.
Given these challenges, can converged infrastructure help
address some of these complexities?
As the name implies, converged infrastructure is the
consolidation/integration of data center resources (compute, network and
storage) into a single solution that can be centrally managed. There are also a
few important related concepts - stateless computing and converged fabrics. As
it turns out, both of these technologies can really help in the fight against
operational complexity. Stateless computing refers to servers that do not store
any unique software configuration or “state” within them when they are powered
off. The value of this approach is that servers become anonymous resources that
can be used to run any operating system, hypervisor and application at any
time. Converged fabric solutions are another example of consolidation but down
at the network/fabric layer - essentially sending network, storage and
management traffic over a single wire. This is important in the drive for simplification because it reduces the number of physical components. Fewer components means fewer things to manage, fewer things that can fail, lower costs and better utilization – all things every IT director is striving for.
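To make the stateless idea concrete, here is a minimal sketch in Python of what a software-defined server profile might look like. It is purely illustrative – the class and field names are my own, not Egenera’s (or any vendor’s) actual API – but it captures the key point: all of a server’s identity, including its network and storage connectivity, lives in software, so any anonymous node in the pool can assume it and give it back.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: a "server profile" holds the state that would otherwise
# live on the physical box -- identity, network and storage connectivity --
# so any anonymous node in the pool can assume it.

@dataclass
class NetworkInterface:
    name: str            # e.g. "eth0"
    vlan: int            # VLAN carried over the converged fabric
    mac: str             # software-assigned MAC, not a burned-in address

@dataclass
class StorageTarget:
    wwn: str             # SAN volume the profile boots from
    boot: bool = False

@dataclass
class ServerProfile:
    name: str
    cpus: int
    memory_gb: int
    nics: List[NetworkInterface] = field(default_factory=list)
    storage: List[StorageTarget] = field(default_factory=list)
    assigned_node: Optional[str] = None   # None until bound to hardware

    def assign(self, node_id: str) -> None:
        """Bind this profile to any free, anonymous node in the pool."""
        self.assigned_node = node_id

    def release(self) -> None:
        """Return the node to the pool; no state is left behind on it."""
        self.assigned_node = None


# The same profile can run on node-07 today and node-12 tomorrow.
web01 = ServerProfile(
    name="web01", cpus=16, memory_gb=64,
    nics=[NetworkInterface("eth0", vlan=100, mac="02:00:00:00:01:01")],
    storage=[StorageTarget(wwn="50:06:01:60:3b:e0:12:34", boot=True)],
)
web01.assign("node-07")
web01.release()
web01.assign("node-12")
```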
In my view, converged infrastructure with stateless computing and converged fabrics is ultimately what is needed to address the complexity of physical, virtual and hybrid clouds. Let’s examine why.
By combining compute, storage and network with converged
infrastructure management, you get an integrated solution that provides a single
pane of glass for managing the disparate parts of your infrastructure. This
certainly addresses one of the major pain points with today’s data center. The
complexity of different management interfaces for each subsystem is a serious
headache for IT, and having a single user interface to provision and manage all
of these resources is just what the doctor ordered.
The integration of these technologies lends itself to a
simpler environment and a significant increase in automation. The traditional
workflow for deploying a server is complex, because of the manual breaks in the
workflow that naturally happen as IT moves between the various boundaries of
compute, network and storage. Each of these subsystems often requires its own specialized expertise to orchestrate. With converged infrastructure solutions,
these complex workflows become simple automated activities that are driven by
software. As always, automation is king in terms of simplifying and
streamlining IT operations.
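To illustrate what that software-driven workflow can look like, here is a hypothetical Python sketch that chains the compute, network and storage steps into one automated sequence instead of a series of manual hand-offs. The client classes and method names are assumptions made for the example, not a real product interface.

```python
# Hypothetical sketch: the compute, network and storage steps that would
# normally involve manual breaks between teams are chained into a single
# software-driven sequence. None of these classes is a real product API.

class ComputeClient:
    def allocate_node(self, cpus: int, memory_gb: int) -> str:
        print(f"compute : reserving a node with {cpus} CPUs / {memory_gb} GB")
        return "node-42"

class FabricClient:
    def connect(self, node_id: str, vlan: int) -> None:
        print(f"network : attaching {node_id} to VLAN {vlan} over the fabric")

class StorageClient:
    def map_boot_lun(self, node_id: str, lun: str) -> None:
        print(f"storage : mapping boot volume {lun} to {node_id}")

def deploy_server(cpus: int, memory_gb: int, vlan: int, boot_lun: str) -> str:
    """One software call replaces the manual hand-offs between subsystems."""
    compute, fabric, storage = ComputeClient(), FabricClient(), StorageClient()
    node = compute.allocate_node(cpus, memory_gb)
    fabric.connect(node, vlan)
    storage.map_boot_lun(node, boot_lun)
    print(f"done    : {node} is ready to boot")
    return node

if __name__ == "__main__":
    deploy_server(cpus=8, memory_gb=32, vlan=100, boot_lun="lun-web-01")
```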
A converged infrastructure solution on its own addresses some of the key pressure points that IT admins experience today; combined with stateless computing and a converged fabric, it delivers the ultimate in simplicity, flexibility and automation for IT:
1. Enables provisioning of bare metal in the same way as provisioning virtual servers. You can now create your physical server by defining your compute resources, network interfaces and storage connectivity, all via a simple software interface. This allows IT to get back to a single process for deploying and managing their infrastructure (regardless of whether it’s virtual or physical) – a real impact on IT operations.
2. Higher service levels. The power of a stateless computing approach enables automated hardware failover capabilities that radically simplify the delivery of highly available services. In fact, it becomes so easy with this model that IT administrators can make any operating system and application highly available with the click of a button, and even pool failover resources between applications – driving incredible efficiency and flexibility.
3. Flexibility that improves utilization. A converged infrastructure model allows you to take full advantage of your compute resources by moving compute power to where you need it most, when you need it. For application developers, it ensures the ability to right-size applications both before and after production.
4. Simplified Disaster Recovery (DR). One of the key ingredients of a simplified DR approach is creating a software definition for a physical server. Once you have this model in place, it is easy to copy those definitions to different locations around the world. Of course, you have to copy the server’s data too. The key benefit is that it creates a digital blueprint of data center resources – the server definitions, network and storage connectivity, along with any policies. In the case of a disaster, the entire environment (servers, network and storage connectivity) can be reconstituted on a completely different set of physical resources, as the sketch below illustrates. This is a powerful enabler for IT to simplify and protect the business, and to do it in a way that increases the reliability and effectiveness of IT.
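As promised above, here is a minimal, hypothetical sketch of the digital blueprint idea from point 4: the site’s server definitions are serialized, copied offsite, and later reconstituted on a completely different pool of hardware. The names and fields are illustrative only, and replicating the servers’ data is assumed to be handled separately, as noted above.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Illustrative only: because each server is just a software definition, the
# whole site can be captured as a blueprint, shipped to another location and
# rebuilt on whatever hardware the DR site has available.

@dataclass
class ServerDefinition:
    name: str
    cpus: int
    memory_gb: int
    vlans: List[int]
    boot_lun: str

def export_blueprint(servers: List[ServerDefinition]) -> str:
    """Capture the site's server definitions as a portable blueprint."""
    return json.dumps([asdict(s) for s in servers], indent=2)

def reconstitute(blueprint: str, node_pool: List[str]) -> None:
    """Rebuild every definition on the DR site's anonymous node pool."""
    definitions = [ServerDefinition(**d) for d in json.loads(blueprint)]
    for definition, node in zip(definitions, node_pool):
        print(f"restoring {definition.name} on {node} "
              f"(VLANs {definition.vlans}, boot volume {definition.boot_lun})")

if __name__ == "__main__":
    site = [
        ServerDefinition("web01", 8, 32, [100], "lun-web-01"),
        ServerDefinition("db01", 16, 128, [100, 200], "lun-db-01"),
    ]
    blueprint = export_blueprint(site)                      # copied offsite routinely
    reconstitute(blueprint, ["dr-node-01", "dr-node-02"])   # after a disaster
```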
So, what value does this all bring from the business point
of view?
- Faster delivery of business services
- Better service levels for business services
- Lower capital and operational costs, as well as
reduced software license costs
- Enhanced availability and business continuity insurance
- Flexibility to react to change
I think it’s clear that while virtualization and cloud
computing have brought fantastic benefits to IT, those trends have also caused
serious disruption across the industry. A converged infrastructure approach can
ensure you get the benefits you are striving for, without the headaches and complexity you want to avoid.
About the Author
Scott Geng is Chief Technology Officer and Executive Vice President of Engineering at Egenera. Previously, Geng managed the development of leading-edge operating systems and middleware products for Hitachi Computer Products, including development of the operating system for the world’s then-fastest commercial supercomputer and the first industry-compliant Enterprise JavaBeans Server. Geng has also held senior technical positions at IBM, where he served as advisory programmer and team leader in bringing UNIX to the mainframe; The Open Software Foundation, where he served as consulting engineer for the OSF/1 1.3 micro-kernel release; and Wang Laboratories, as principal engineer for the base kernel and member of the architectural review board.
About Egenera
Converge. Unify. Simplify. That’s how Egenera brings confidence to the cloud. The company’s industry leading cloud and data center infrastructure management software, Egenera PAN Cloud Director™ and PAN Manager® software, provide a simple yet powerful way to quickly design, deploy and manage IT services while guaranteeing those cloud services automatically meet the security, performance and availability levels required by the business. Headquartered in Boxborough, Mass., Egenera has thousands of production installations globally, including premier enterprise data centers, service providers and government agencies. For more information on the company, please visit
egenera.com