Wednesday, June 10, 2015

Blueprint: Hyper-Converged is Leading the Data Center Revolution

by Sachin Chheda, Senior Director, Product and Technical Marketing, Nutanix

Historically, provisioning and managing infrastructure in a data center has never been straightforward. There are servers to purchase, storage to acquire and a network supplier to choose, and all of that before the software is even selected.
All that is changing: the notion that managing a data center requires acquiring an array of different products is fast disappearing. According to a 2014 Gartner survey, 29% of enterprises are now looking to move to a converged infrastructure.

It used to be easy to deal with additional demands: more compute power? Just stick an extra server in the rack. Additional storage? Time for another storage array. Network running slow? Time for more bandwidth, and so on. In today's budget-conscious times, this is not a worthwhile approach. The modern-day business cannot simply throw money at the problem, which is why we've seen companies turn to virtualization and server consolidation in an attempt to cut down on their data center footprint.

In fact, the issue was not just about matching needs to infrastructure but also about coping with additional demands. The infrastructure, especially storage, had to be over-provisioned to cope with any peaks. Modernizing the infrastructure by opting for more virtualization has not helped; in fact, it has introduced even more over-provisioning, because existing storage components lack the levels of automation needed for optimal performance.

There's a disconnect between the speed of virtual provisioning and the time it takes to provision an organization's storage. Just because a virtual machine can be brought online in a matter of minutes doesn't mean the same follows for other parts of the infrastructure: setting up storage and networking still takes a long time. The growing use of fast flash storage has made this worse, since flash is quick enough to shift the performance bottleneck onto the network.

Trying to combine legacy networked storage with virtualized servers guarantees a bottleneck somewhere. Essentially, you'll have a supercharged datacenter in the body of an old jalopy. In this scenario, the old approach of throwing more dollars at your infrastructure isn't solving the problem but actually making things worse, as the gap between virtualized servers and legacy hardware grows wider.

What does the CIO do? Welcome to the world of hyper-converged infrastructure. It's an overloaded phrase that sounds like it was cooked up by an imaginative publicity department, but this is no over-hyped buzzword. It's a technology that provides a leap forward for companies that want to cope with increasing demand on storage, handle today's modern application workloads and manage the whole operation seamlessly, without breaking the bank.

Most important of all, a hyper-converged system provides enterprises with the type of scalability and flexibility enjoyed by the software-based architectures employed by Amazon Web Services (AWS), Google and Facebook.

These companies all faced a common problem: the need to scale quickly and efficiently to meet growing demand. They all tried to construct an architecture using existing technologies, but gave up because the legacy, hardware-centric kit could not cope with the demands. It was obvious that a new type of datacenter infrastructure was needed, one that did away with centralized storage systems, since traditional SANs were one of the major inhibitors.

They did this in several ways: they made extensive use of software-based functionality running on run-of-the-mill commodity hardware, and they exposed REST APIs to introduce a greater degree of automation into the process, helping them cope more effectively with increased demand.
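
As a concrete illustration of that kind of API-driven automation, here is a minimal sketch in Python of provisioning a virtual machine through a single REST call. The endpoint, payload fields and credentials are all hypothetical, invented for the example; any real platform defines its own.

    import requests  # widely used HTTP client library

    # Hypothetical management endpoint and credentials -- placeholders only.
    API_BASE = "https://cluster.example.com/api/v1"
    AUTH = ("admin", "secret")

    # Declare the desired VM; with a REST-driven platform, compute and its
    # backing storage can be provisioned in one automated, repeatable call.
    vm_spec = {
        "name": "web-01",
        "vcpus": 4,
        "memory_mb": 8192,
        "disks": [{"size_gb": 100}],
    }

    response = requests.post(API_BASE + "/vms", json=vm_spec, auth=AUTH)
    response.raise_for_status()  # fail loudly if the request was rejected
    print("Provisioning task accepted:", response.json())

Because the request is just data, the same call can be scripted, versioned and repeated thousands of times, which is precisely what manual server, storage and network provisioning cannot do.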

The operational efficiency of AWS, Google and Facebook is now the bar against which IT is measured. Web-scale methodologies built on software-based distributed architecture are entering the enterprise market in the form of solutions that do not require a team of highly skilled experts to configure and maintain. Enterprises can stop being envious of hyperscale IT and start looking at ways to deploy the technology in their own datacenters.

Rather than constructing an elaborate collection of server farms and storage arrays, the hyper-converged approach takes a single state-of-the-art x86 server combined with SSD and HDD storage. It doesn't sound very complex, but the secret sauce is in the software layer of the storage infrastructure. A hyper-converged solution, for example, could provide an architecture that runs in conjunction with industry-standard hypervisors including ESXi, Hyper-V and KVM. Such datacenter infrastructure could deliver modular and linear scalability, better performance and simplified manageability for any virtualized workload at any scale. Hyper-converged solutions can go even further and offer integrated enterprise-class availability features, including VM-centric data protection and disaster recovery, enabling virtualized tier 0 and tier 1 workloads to run on the same infrastructure supporting initiatives such as VDI and server virtualization.
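
To make "modular and linear scalability" concrete, here is a minimal sketch, in Python with invented per-node figures, of how usable capacity and aggregate performance grow in step as identical nodes join a cluster, after setting aside replication overhead and one node's worth of capacity for resiliency. The numbers are illustrative assumptions, not any vendor's specifications.

    # Illustrative per-node figures -- assumptions for the example only.
    NODE_CAPACITY_TB = 10.0   # raw storage contributed by each node
    NODE_IOPS = 50000         # I/O operations per second per node
    REPLICATION_FACTOR = 2    # every write is stored twice

    def cluster_profile(nodes):
        """Return (usable TB, aggregate IOPS) for a cluster of identical nodes."""
        raw = (nodes - 1) * NODE_CAPACITY_TB   # hold back one node (N+1 resiliency)
        usable = raw / REPLICATION_FACTOR      # account for replication copies
        return usable, nodes * NODE_IOPS

    for n in (4, 8, 16):
        tb, iops = cluster_profile(n)
        print("%2d nodes: %6.1f TB usable, %d IOPS" % (n, tb, iops))

Because every node adds the same slice of compute, storage and I/O, capacity planning reduces to simple multiplication rather than forklift upgrades.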

One of the major early stumbling blocks was that the initial pre-converged architectures failed to deliver on their promises. Consider the examples where different vendors combined their proprietary offerings to create a converged stack, such as Cisco, VMware and EMC in Vblock, or Cisco and NetApp in FlexPod: these products promised easier integration but failed at the hardware level, as each component was still too proprietary to deliver efficiency when paired with the others. True hyper-converged solutions use commodity hardware and, as discussed above, let the software do the hard work.

For example, the wrong sort of converged architecture does nothing to do away with the need for SANs. As we saw earlier, the more virtualization that is introduced, the greater the requirement for additional storage, which is often needed upfront to minimize disruption and the re-architecting of datacenter infrastructure. And this storage can lead to bottlenecks that reduce the efficiency of the infrastructure. It is therefore far better to opt for the hyper-converged approach, which negates the need for upfront investment in a traditional SAN.

Companies are thinking more creatively about how to handle the increased storage workloads brought about by virtualization. There's a growing need for more compute power and an increasing need to keep costs low and bring greater efficiencies, but there's no need to follow the old methods, nor to opt for an implementation based around long-established SAN technology. The world of the data center is changing inexorably, and CIOs need to seize the opportunity to bring their systems up to date. In many cases, hyper-converged infrastructure will provide the future-proofed solution these companies demand.

About the Author

Sachin Chheda is Senior Director of Product and Technical Marketing at Nutanix.

About Nutanix

Nutanix delivers invisible infrastructure for next-generation enterprise computing, elevating IT to focus on the applications and services that power their business. The company’s software-driven Xtreme Computing Platform natively converges compute, virtualization and storage into a single solution to drive simplicity in the datacenter. Using Nutanix, customers benefit from predictable performance, linear scalability and cloud-like infrastructure consumption.


Got an idea for a Blueprint column? We welcome your ideas on next-gen network architecture.
See our guidelines.


Cisco Expands its Intercloud Ecosystem

Cisco announced major milestones and next steps for its Intercloud initiative -- a globally connected network of clouds -- saying that it now has 65 partners running Intercloud Fabric in over 350 data centers. The aim of Cisco Intercloud is to take complexity out of hybrid clouds.

Some Intercloud highlights:

  • Recent enhancements to Cisco Intercloud are focused in the areas of security, control, and expanded hypervisor support, including OpenStack, KVM and Microsoft Hyper-V.
  • Cisco has signed up 35 independent software vendors (ISVs) to accelerate the creation of innovative cloud services for the Intercloud.
  • At Cisco Live!, 10 Intercloud partners—Cirrity, Datalink, iland, Long View, Peak 10, Presidio, QTS, Quest, Sungard Availability Services and Virtustream—announced new hybrid cloud services built on Cisco Intercloud Fabric. Cisco also announced that KPIT Technologies, Holtzbrinck Publishing Group, and the Salvation Army are using Cisco Intercloud Fabric to implement a single operational model across dev/test, quality assurance and production environments.
  • Cisco is planning to open an Intercloud Marketplace -- a partner-centric global storefront for Intercloud-based applications and cloud services from Cisco and partners. The first wave of new Cisco Intercloud Marketplace application developers and service partners includes: ActiveState, Apprenda, Basho, Chef, Citrix, CliQr, Cloud Enabled, CloudBerry Lab, Cloudera, Cloudify, CloudLink, Couchbase, CTERA, Datadog, Davra Networks, desktopsites, Druva, Egnyte, ElasticBox, F5 Networks, Hortonworks, Informatica, MapR, MongoDB, Moonwalk, Nirmata, Panzura, Pegasystems, Platfora, Sanovi, ScaleArc, Skytree, StoAmigo, Talisen Technologies and Zenoss. The Intercloud Marketplace launch is targeted for fall of 2015.
  • Cisco is expanding its participation in leading open source developer communities like Cloud Foundry, OpenShift and Kubernetes. To help developers build container-based micro-services, Cisco is building an integrated toolset for continuous integration and deployment. By providing developers with access to the latest APIs and micro-services across all Cisco cloud, IoE and big data technologies delivered through the Intercloud, Cisco is extending its focus on embracing and enabling its developer community via its DevNet initiative.
  • Cisco Intercloud Fabric now extends its zone-based firewall services to support Microsoft Azure. Cisco Intercloud Fabric Firewall solves the problem of securing traffic between virtual machines without redirecting this traffic to the edge firewall for lookup by including a zone-based firewall, the Cisco Virtual Security Gateway (VSG). With VSG support in Cisco Intercloud Fabric, customers who use the VSG for the Cisco Nexus® 1000V Series Switch in their enterprise data center can extend the same policies to the VSG instance in the public cloud. This allows them to have consistent firewall policies across their entire hybrid cloud infrastructure.
  • Cisco is now extending its VM onboarding to support Amazon VPC. Cisco Intercloud Fabric is designed to make the onramp to hybrid cloud easier. Its VM onboarding capabilities allow businesses to easily extend management to VMs already deployed in a public cloud by identifying pre-existing target VMs and moving them under Intercloud Fabric control.
  • Cisco Intercloud Fabric now supports OpenStack, KVM and Microsoft Hyper-V. A business's ability to choose the right cloud for the right workload should not be limited by the underlying infrastructure or, in this case, the hypervisor. Business requirements do not care which hypervisor a public cloud uses, and operational models are better served if all public clouds are treated the same. With support for VMware vSphere, OpenStack KVM and Microsoft Hyper-V, Cisco Intercloud Fabric now supports the vast majority of hypervisor deployments.


http://www.cisco.com

Nutanix Builds New Capabilities for its Hyper-Converged Infrastructure

Nutanix unveiled two new product families for hyperconverged data centers -- Nutanix Acropolis and Nutanix Prism -- both aimed at simplifying and scaling infrastructure while enhancing IT service delivery.

Nutanix Acropolis, which builds on the core capabilities of the company’s flagship hyperconverged product, is an open platform for virtualization and application mobility. Nutanix supports a choice of application platforms, including traditional hypervisors, emerging hypervisors and containers. Nutanix Acropolis comprises three foundational components:

  • Distributed Storage Fabric - enables common web-scale services across multiple storage protocols. Acropolis can mount volumes as in-guest iSCSI storage for applications with specific storage protocol requirements, such as Microsoft Exchange, unifying all workloads on a single infrastructure. Acropolis also includes a robust implementation of erasure coding storage optimization, which reduces the storage required for data replication by up to 75% compared to traditional mirroring techniques (see the worked example after this list), and can be enabled on previously deployed Nutanix appliances with a simple software upgrade.
  • App Mobility Fabric - a newly-designed open environment capable of delivering virtual machine (VM) placement, VM migration, and VM conversion, as well as cross-hypervisor high availability and integrated disaster recovery. It supports most virtualized applications, and will provide a more seamless path to containers and hybrid cloud computing.
  • Acropolis Hypervisor - while the Distributed Storage Fabric fully supports traditional hypervisors such as VMware vSphere and Microsoft Hyper-V, Acropolis also includes a native hypervisor based on the proven Linux KVM hypervisor. It features enhanced security, self-healing capabilities based on SaltStack and enterprise-grade VM management.
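
The "up to 75%" erasure coding figure above is easiest to see with a little arithmetic. The following Python sketch compares the extra capacity consumed by simple two-way mirroring against a hypothetical 4-data/1-parity erasure-coded stripe; the stripe geometry is an assumption chosen for illustration, not necessarily what a given cluster would use.

    def protection_overhead(data_units, redundant_units):
        """Extra capacity consumed by redundancy, as a fraction of the data itself."""
        return redundant_units / float(data_units)

    mirror = protection_overhead(1, 1)   # two-way mirroring: a full second copy
    ec_4_1 = protection_overhead(4, 1)   # 4 data blocks protected by 1 parity block

    print("Mirroring overhead:      %.0f%%" % (mirror * 100))                 # 100%
    print("Erasure coding overhead: %.0f%%" % (ec_4_1 * 100))                 # 25%
    print("Reduction in overhead:   %.0f%%" % ((1 - ec_4_1 / mirror) * 100))  # 75%

Going from a full extra copy of every block to one parity block per four data blocks cuts the redundancy overhead from 100% of the data to 25%, a 75% reduction.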

Nutanix Prism, an infrastructure management system, features one-click software upgrades for more efficient maintenance, one-click insight for detailed capacity trend analysis and planning, and one-click troubleshooting for rapid issue identification and resolution. Nutanix said its simplified management provides an end-to-end view of all workflows, something difficult to achieve with legacy three-tier compute, storage and virtualization architectures. It also features machine learning technology with built-in heuristics and business intelligence to mine large volumes of system data and generate actionable insights for enhancing all aspects of infrastructure performance.
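
Nutanix has not disclosed how Prism's analytics work, but the underlying idea of capacity trend analysis, fitting a trend to historical usage and projecting when capacity runs out, can be sketched in a few lines of Python. The telemetry below is invented for the example, and a real product's models would certainly be more sophisticated than a straight-line fit.

    import numpy as np

    # Invented telemetry: storage consumed (TB) sampled once a day for 30 days.
    days = np.arange(30)
    used_tb = 40 + 0.5 * days + np.random.default_rng(0).normal(0, 0.3, 30)

    # Least-squares linear trend: used ~= slope * day + intercept.
    slope, intercept = np.polyfit(days, used_tb, 1)

    capacity_tb = 80.0  # assumed total usable capacity of the cluster
    days_until_full = (capacity_tb - intercept) / slope
    print("Growth: %.2f TB/day; full in roughly %.0f days" % (slope, days_until_full))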

“The most transformative technologies are the ones we don’t even think about. They work all the time, scale on demand and self-heal. In other words, they are invisible,” said Dheeraj Pandey, CEO and founder of Nutanix. “Building on our foundations of web-scale engineering and consumer-grade design, we will make virtualization as invisible as we’ve made storage and elevate enterprise IT expectations yet again.”

http://nutanix.com/invisible-infrastructure

Dell’Oro: Data Center Switching Growth Driven by Big Clouds

The Ethernet Switch – Data Center market grew strongly in the first quarter 2015 to slightly more than $2.0 billion, according to a new report from Dell'Oro Group, thanks to spending from big cloud operators such as Amazon, Apple, Facebook, Google, and Microsoft.

The Quarterly Ethernet Switch – Data Center Report also indicates that Cloud Customers will begin migrating to 25 and 100 GE at the end of 2015.

“Strong adoption of 40 GE amongst large US Cloud customers drove strong year-over-year growth in the Ethernet Switch – Data Center market,” said Alan Weckel, Vice President of Ethernet Switch – Data Center market research at Dell’Oro Group.  “When Dell’Oro Group looks at the overall data center market, many trends are similar between Servers, Storage, and Networking, such as the movement by Cloud customers to white box / bare metal solutions.  We are often able to leverage our holistic coverage to forecast trends early and accurately before they become widespread, such as the upcoming market migration to 25 GE server access, or the recent movement in large US Cloud customers to 40 GE,” stated Weckel.

http://www.delloro.com

IHS: Data Center and Enterprise SDN Markets to See Strong Growth

The software-defined networking (SDN) market (Ethernet switches and controllers) will reach $13 billion in 2019, up from $781 million in 2014, as the availability of branded bare metal switches and use of SDN by enterprises and smaller cloud service providers (CSPs) drive growth, according to a newly published IHS Infonetics' biannual Data Center and Enterprise SDN Hardware and Software report.

Some highlights:

  • SDN still spells opportunity for existing and new vendors; the leaders in the SDN market serving the enterprise data center will be solidified during the next 2 years as 2015 lab trials give way to live production deployments
  • Bare metal switches are the top in-use SDN-capable switch use case
  • SDN network virtualization overlays (NVOs) will go mainstream in 2016

“The SDN market is still forming, and the top market share slots will change hands frequently, but currently the segment leaders are Dell, HP, VMware and White Box,” said Cliff Grossner, Ph.D., research director for data center, cloud and SDN at IHS.

http://www.infonetics.com

IHS: Microwave Equipment Sales Decline in Q1

The global microwave equipment market totaled $1.2 billion in the first quarter of 2015 (1Q15), a 9 percent sequential decline as operators continued to be cautious with spending, according to the IHS Infonetics Microwave Equipment report.

Some Highlights:

  • Microwave equipment revenue has been trending downward for the last few years due to pricing pressure and competition from fiber-based backhaul solutions
  • Backhaul comprised 85 percent of microwave equipment revenue in 1Q15, while transport made up 8 percent and access 7 percent
  • From a technology perspective, the dual Ethernet/TDM and all-Ethernet segments dominated microwave sales in Q1
  • EMEA and Asia Pacific again led in microwave equipment revenue, together combining for 81 percent of global share
  • Most microwave gear manufacturers reported decreased quarter-over-quarter results in 1Q15

“While some seasonality in the first quarter is typical, the microwave market was a little softer than usual in the first quarter as operators kept a tight rein on capex. On the plus side, the market did grow 15 percent from the year-ago first quarter,” said Richard Webb, research director for mobile backhaul and small cells at IHS. “In the long term, new shoots of growth from LTE/LTE-A backhaul upgrades and small cell deployments will give the market a boost, driving it back to revenue growth beginning in 2017."

http://www.infonetics.com/pr/2015/1Q15-Microwave-Equipment-Market-Highlights.asp

Samsung and Telefónica Collaborate on IoT

Samsung Electronics Iberia and Telefónica I+D have formed a collaborative partnership focused on IoT.

The companies are working to integrate the Telefónica Thinking Things ecosystem with the advanced capabilities of Samsung’s devices. Two prototypes are already under development. The first integrates Telefónica’s Thinking Things Modular solution with the capabilities of Samsung’s devices and sensor technology in order to identify new products and services. The second involves the development of a physical button that will simplify the application of IoT capabilities to different environments.

“This initiative with Telefónica allows us to explore the countless possibilities offered by Samsung’s industry-leading technology in order to create innovative solutions and make them available to our consumers,” said Alfredo Aragüés, Samsung Spain's R&D division lead. “At Samsung, we believe that the true value of IoT technology is making the user experience easier and more intuitive. This is why we regard Telefónica's Thinking Things platform as an excellent environment for the development of revolutionary initiatives in the IoT field that will be able to bring these opportunities to life in the present day.”

http://saladeprensa.telefonica.com/