Wednesday, August 14, 2019

Blueprint: Turn Your Data Center into an Elastic Bare-Metal Cloud

by Denise Shiffman, Chief Product Officer, DriveScale

What if you could create an automated, elastic, cloud-like experience in your own data center for a fraction of the cost of the public cloud? Today, high-performance, data-oriented and containerized applications are commonly deployed on bare metal, which keeps them on premises. But the hardware deployed is static, leaving IT with overprovisioned, underutilized, siloed clusters.

Throughout the evolution of data center IT infrastructure, one thing has remained constant. Once deployed, compute, storage and networking systems remain fixed and inflexible. The move to virtual machines better utilized the resources on the host system they were tied to, but virtual machines didn’t make data center hardware more dynamic or adaptable.

In the era of advanced analytics, machine learning and cloud-native applications, IT needs to find ways to quickly adapt to new workloads and ever-growing data. This has many people talking about software-defined solutions. When software is pulled out of proprietary hardware, whether it’s compute, storage or networking hardware, then flexibility is increased, and costs are reduced. With next-generation, composable infrastructure, software-defined takes on new meaning. For the first time, IT can create and recreate logical hardware through software, making the hardware infrastructure fully programmable. And the benefits are enormous.

Composable Infrastructure can also support the move to more flexible and speedy deployments through DevOps, with an automated and dynamic solution integrated with Kubernetes and containers. When deploying data-intensive, scale-out workloads, IT now has the opportunity to shift compute and storage infrastructures away from static, fixed resources. Modern database and application deployments require modern infrastructure, driving the emergence of Composable Infrastructure – and it promises to address the exact problems that traditional data centers cannot. In fact, for the first time, using Composable Infrastructure, any data center can become an elastic bare-metal cloud. But what exactly is Composable Infrastructure, and how do you implement it?

Elastic and Fully-Automated Infrastructure

Composable Infrastructure begins with disaggregating compute nodes from storage, essentially moving the drives to simple storage systems on a standard Ethernet network. Through a REST API, GUI or template, users choose the instances of compute and the instances of storage required by an application or workload, and the cluster of resources is created on the fly, ready for application deployment. Similar to the way users choose instances in the public cloud and the cloud provider stitches that solution together, composable provides the ability to flexibly create, adapt, deploy and redeploy compute and storage resources instantly using pools of heterogeneous, commodity compute, storage and network fabric.
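As a rough sketch of what such an API-driven composition request might look like, the snippet below assembles a request body for a hypothetical composable-infrastructure REST API. The endpoint shape, field names, and resource classes are illustrative assumptions, not DriveScale's actual API schema:

```python
import json

def compose_cluster_request(name, compute_nodes, storage_tb, storage_class="nvme"):
    """Build a request body for a hypothetical composable-infrastructure
    REST API that carves a logical cluster out of disaggregated pools.
    All field names here are illustrative, not a real vendor schema."""
    return {
        "cluster": name,
        "compute": {"instances": compute_nodes},   # diskless server nodes
        "storage": {"capacity_tb": storage_tb,     # drawn from the shared drive pool
                    "class": storage_class},       # e.g. NVMe flash
        "network": {"fabric": "ethernet"},         # standard Ethernet fabric
    }

# A 10-node analytics cluster backed by 80 TB of pooled NVMe:
payload = compose_cluster_request("analytics", compute_nodes=10, storage_tb=80)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the workflow: the same call with different numbers recomposes the cluster, rather than reracking hardware.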

Composable gives you cloud agility and scale, and fundamentally different economics.
  • Eliminate Wasted Spend: With local storage inside the server, fixed configurations of compute and storage resources end up trapped inside the box and left unused. Composable Infrastructure enables the ability to independently scale processing and storage and make adjustments to deployments on the fly. Composable eliminates overprovisioning and stranded resources and enables the acquisition of lower cost hardware.
  • Low Cost, Automated Infrastructure: Providing automated infrastructure on premises, composable enables the flexibility and agility of cloud architectures, and creates independent lifecycles for compute and storage lowering costs and eliminating the noisy neighbors problem in the cloud.
  • Performance and Scale: With today’s high-speed standard Ethernet networks, composable provides performance equivalent to local drives, while eliminating the need for specialized storage networks. Just as critical, composable solutions can scale seamlessly to thousands of nodes while maintaining high performance and high availability.

The Local Storage Conundrum

Drive technology continues to advance with larger drives and with NVMe™ flash. Trapping these drives inside a server limits the ability to gain full utilization of these valuable resources. With machine learning and advanced analytics, storage needs to be shared with an ever-larger number of servers, and users need to be able to expand and contract capacity on demand. Composable NVMe puts NVMe on a fabric, whether that’s a TCP, RDMA or iSCSI fabric (often referred to as NVMe over Fabrics), and users gain significant advantages:

  • Elastic storage: By disaggregating compute and storage, NVMe drives or slices of drives can be attached to almost any number of servers. The amount of storage can be expanded or reduced on demand. And a single building-block vendor SKU can be used across a wide variety of configurations and use cases, eliminating operational complexity.
  • Increased storage utilization: Historically, flash utilization has been a significant concern. Composable NVMe over fabrics enables full utilization of the drives and the storage system. Resources from storage systems are allocated to servers in a simple and fully automated way, while very high IOPS and low latency comparable to local drives are maintained.
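On Linux, attaching a remote NVMe namespace over a fabric is typically done with the open-source nvme-cli tool. The sketch below only assembles the `nvme connect` command line for a given transport; the target address and NQN are placeholders, and whether TCP or RDMA is appropriate depends on the fabric in use:

```python
def nvme_connect_cmd(transport, target_ip, nqn, port=4420):
    """Assemble an nvme-cli 'connect' command for NVMe over Fabrics.
    transport: 'tcp' or 'rdma'. The address and NQN passed in are
    placeholders for a real disaggregated storage target."""
    if transport not in ("tcp", "rdma"):
        raise ValueError("unsupported NVMe-oF transport: %s" % transport)
    return ["nvme", "connect",
            "-t", transport,    # fabric transport type
            "-a", target_ip,    # target (e.g. eBOD) IP address
            "-s", str(port),    # 4420 is the conventional NVMe-oF port
            "-n", nqn]          # NVMe Qualified Name of the subsystem

cmd = nvme_connect_cmd("tcp", "192.0.2.10", "nqn.2019-08.example:pool0")
print(" ".join(cmd))
```

Once connected, the remote namespace appears to the host as an ordinary local NVMe block device, which is what makes the "comparable to local drives" claim above possible.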

The Elastic Bare Metal Cloud Data Center

Deploying Kubernetes containerized applications bare metal with Composable Infrastructure enables optimized resource utilization and application, data and hardware availability. The combination of Kubernetes with programmable bare-metal resources turns any data center into a cloud.

Composable data centers eradicate static infrastructure and impose a model where hardware is redefined as a flexible, adaptable set of resources composed and re-composed at will as applications require – making infrastructure as code a reality. Hardware elasticity and cost-efficiencies can be achieved by using disaggregated, heterogeneous building blocks, requiring just a single diskless server SKU and a single eBOD (Ethernet-attached Box of Drives) SKU or JBOD (Just a Box of Drives) SKU to create an enormous array of logical server designs. Failed drives or compute nodes can be replaced through software, and compute and storage are scaled or upgraded independently. And with the ability to quickly and easily determine optimal resource requirements and adapt ratios of resources for deployed applications, composable data centers won’t leave resources stranded or underutilized.
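To make the "failed drives replaced through software" idea concrete, here is a minimal sketch of that operation, with plain dicts standing in for a real composition API (the data structures and function are hypothetical, not a vendor interface):

```python
def replace_failed_drive(cluster, failed_drive_id, spare_pool):
    """Sketch of software-level drive replacement in a composable cluster:
    detach the failed drive from the logical server and attach a spare
    from the shared eBOD/JBOD pool. No one touches the hardware."""
    cluster["drives"].remove(failed_drive_id)  # detach failed drive
    spare = spare_pool.pop()                   # take any free drive from the pool
    cluster["drives"].append(spare)            # logical server design is unchanged
    return spare

cluster = {"name": "analytics", "drives": ["d01", "d02", "d03"]}
pool = ["d17", "d18"]
replacement = replace_failed_drive(cluster, "d02", pool)
print(cluster["drives"], "replacement:", replacement)
```

The failed drive simply returns to a repair queue while the cluster keeps its composed shape, which is the operational win over a fixed server with captive local disks.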

Getting Started with Composable Infrastructure

Composable Infrastructure is built to meet the scale, performance and high availability demands of data-intensive and cloud-native applications while dramatically lowering the cost of deployment. Moving from static to fluid infrastructure may sound like a big jump, but composable doesn’t require a forklift upgrade. Composable Infrastructure can be easily added to a current cluster and used for the expansion of that cluster. It’s a seamless way to get started and to see cost-savings on day one.

Deploying applications in a composable data center will make it easier for IT to meet the needs of the business, while increasing speed to deployment and lowering infrastructure costs. Once you experience the power and control provided by Composable Infrastructure, you’ll wonder how you ever lived without it.

About DriveScale  
DriveScale instantly turns any data center into an elastic bare-metal cloud with on-demand instances of compute, GPU and storage, including native NVMe over Fabrics, to deliver the exact resources a workload needs, and to expand, reduce or replace resources on the fly. With DriveScale, high-performance bare-metal or Kubernetes clusters deploy in seconds for machine learning, advanced analytics and cloud-native applications at a fraction of the cost of the public cloud. www.drivescale.com

Cisco posts Q4 revenue of $13.4 billion, up 6%

Cisco reported fourth-quarter revenue of $13.4 billion, up 6% over the same period last year. Net income (GAAP) amounted to $2.2 billion or $0.51 per share. Non-GAAP net income was $3.6 billion or $0.83 per share.

For full FY19, Cisco reported total revenue of $51.7 billion, an increase of 7%. On a GAAP basis, net income was $11.6 billion and EPS was $2.61. On a non-GAAP basis, net income was $13.8 billion, up 9% compared to fiscal 2018, and EPS was $3.10, an increase of 20%.

"Our Q4 results marked a strong end to a great year. We are executing well in a dynamic environment, delivering tremendous innovation across our portfolio and extending our market leadership," said Chuck Robbins, chairman and CEO of Cisco. "We are committed to providing our customers ongoing value through differentiated solutions, and we are well positioned to take advantage of the long-term growth opportunities ahead."

Some highlights:

  • Product revenue was up 7% and service revenue up 4%. 
  • Revenue by geographic segment was: Americas up 9%, EMEA up 7%, and APJC down 4%. 
  • Product revenue performance was broad based, with growth in Security, up 14%, Applications, up 11%, and Infrastructure Platforms, up 6%.
  • Gross Margin -- On a GAAP basis, total gross margin, product gross margin, and service gross margin were 63.9%, 62.9%, and 66.8%, respectively, as compared with 61.7%, 60.2%, and 66.0%, respectively, in the fourth quarter of 2018.


OIF launches higher baud rate coherent driver modulator project

The OIF has begun a new project to develop a higher baud rate coherent driver modulator.

The “Higher Baud Rate Coherent Driver Modulator” project will define a new version of the Coherent Driver Modulator supporting at least 96 Gbaud for the low modem implementation penalty segment of the coherent market for single optical carrier line rates beyond 400 Gbit/s. Designed for higher data rates and longer reach and optimized for performance, this project is the next generation of the High Bandwidth Coherent Driver Modulator (HB-CDM) Implementation Agreement (IA) published last year.

Following this year’s OIF Q319 Technical and MA&E Committees Meeting in Montreal, OIF also launched work on a white paper detailing low-rate service multiplexing using FlexE and 400ZR. The white paper seeks to eliminate ambiguity and provide clarification on how 400ZR should be leveraged in multiplexing applications. Various network operators are looking for a multiplexing scheme to support lower-rate Ethernet clients (e.g. 4x100GE) into a 400ZR coherent line. This technical white paper will educate the market on how FlexE can be used to aggregate low-rate Ethernet services (e.g. 4x100GE) into 400ZR interfaces.

Andrew Schmitt, founder and directing analyst at Cignal AI, gave member attendees a brief overview of emerging pluggable coherent technologies and the opportunity this new market presents, and spoke with members about current and upcoming OIF work.

“It’s clear that OIF is not resting after a successful effort to standardize 400ZR, proven by the launch of two new projects at the recent Q3 meeting,” said Schmitt. “Also, as interest in pluggable coherent solutions grows, it is good to see OIF soliciting feedback from additional network operators in order to shape requirements for next generation standards.”

https://www.oiforum.com/

ThousandEyes adds support for Alibaba Cloud to its global monitoring

ThousandEyes is boosting its Asia-Pacific monitoring capabilities with support for Alibaba Cloud. Specifically, ThousandEyes added 19 Alibaba Cloud regions worldwide, plus 13 new Cloud Agent locations across Asia-Pacific, including four new locations in India, bringing ThousandEyes Asia-Pacific vantage points to a total of 53 cities and global vantage points to a total of more than 180 cities. This latest expansion adds to ThousandEyes' existing Cloud Agent locations in IaaS providers, which currently includes 15 AWS regions, 15 GCP regions and 25 Azure regions.

"Global organizations today run on the Internet, connecting applications and services to end-users everywhere, and making deep Internet visibility non-negotiable, which is especially relevant for companies operating in Asia-Pacific where heavy sovereign controls impact Internet performance and digital experience," said ThousandEyes vice president of product Joe Vaccaro.

https://www.thousandeyes.com/press-releases/expanding-global-multi-cloud-monitoring-alibaba-cloud

China Unicom's revenue dips as it prepares for 5G rollout

China Unicom reported service revenue of RMB 132.957 billion for the first half of 2019, down 1.1% from RMB 134.423 billion in the same period of 2018. Net profit increased by 16.3% to RMB 6.88 billion. Operating revenue amounted to RMB 144.954 billion, down 2.8% year-over-year.

Mobile service revenue dipped 6.6% compared to last year, despite the company adding 9.32 million subscribers during the first half of the year.

Industry Internet Revenue for 1H19 amounted to RMB 16.72 billion, up by 43% compared to the first half of 2018.

Regarding its upcoming 5G rollout, China Unicom said it is pursuing a “co-build, co-share” strategy to lower CAPEX requirements, tower usage fees, network maintenance expenses and power charges.

https://www.chinaunicom.com.hk/en/ir/presentations.php

Ethernet Alliance Plugfest achieves pass rates > 97% on 25G

A plugfest conducted by the Ethernet Alliance achieved pass rates of > 97% for 25G connectivity.

The third in its ongoing series, the Ethernet Alliance HSN Plugfest drew participation from 13 diverse member companies representing all aspects of the Ethernet ecosystem. Addressing the need to enable emerging technologies, participants tested products and solutions spanning speeds from 25 Gigabit Ethernet (25GbE) up to 400GbE in various form factors such as OSFP, QSFP, and QSFP-DD. Equipment undergoing testing included both electrical and optical interconnects; new signaling and modulation technologies; switches and NICs; cabling; and test and measurement solutions and methodologies.

The High Speed Networking (HSN) Plugfest, which was conducted in late April at the University of New Hampshire InterOperability Laboratory (UNH-IOL), showed consistent improvement over previous events, with Frame Error Rate (FER) tests producing a remarkable 100 percent pass rate and functional interoperability tests achieving an aggregated 97.5 percent pass rate.

“This latest Ethernet Alliance plugfest was a valuable opportunity for testing of both pre-release and market-ready products and solutions against IEEE standards in a confidential, non-competitive environment. The substantial turnout among member companies and high volume of successful tests speaks to Ethernet’s enduring legacy of continuous improvement,” said Dave Chalupsky, plugfest chair and Board of Directors member, Ethernet Alliance; and network product architect, Intel Corporation. “Ethernet’s hallmark multivendor interoperability makes it ideal for addressing global demand for higher-speed connectivity. Test events like this are the key to unleashing that interoperability, so we’re definitely looking forward to our next HSN Plugfest in October 2019.”

Among companies taking part were Amphenol, Anritsu, Arista Networks, Credo Semiconductor, EXFO, Fluke, HG Genuine Co., Intel, The Siemon Company, Spirent, Tektronix, Teledyne LeCroy, and Wilder Technologies.

The next HSN Plugfest, open exclusively to Ethernet Alliance members, is scheduled for October 2019 at UNH-IOL.

FCC Chairman urges approval of T-Mobile + Sprint

FCC Chairman Ajit Pai circulated a draft order with his fellow commissioners urging approval of the T-Mobile + Sprint deal subject to conditions imposed by the Department of Justice.

“After one of the most exhaustive merger reviews in Commission history, the evidence conclusively demonstrates that this transaction will bring fast 5G wireless service to many more Americans and help close the digital divide in rural areas. Moreover, with the conditions included in this draft Order, the merger will promote robust competition in mobile broadband, put critical mid-band spectrum to use, and bring new competition to the fixed broadband market,” said Chairman Pai. “I thank our transaction team for the thorough and careful analysis reflected in this draft Order and hope that my colleagues will vote to approve it.”