Thursday, May 31, 2018

MEF advances MEF 3.0 Ethernet, IP, SD-WAN, and Layer 1 Services

Work continues on MEF 3.0, the global services framework unveiled last November for “defining, delivering, and certifying agile, assured, and orchestrated communication services across a global ecosystem of automated networks.”

MEF reported progress on the standardization of orchestration-ready Ethernet, IP, SD-WAN, and Layer 1 services, which form a core element of the MEF 3.0 framework.

Specifically, MEF has published two new MEF 3.0 Ethernet and IP specifications, progressed two major MEF 3.0 SD-WAN projects, and moved closer to finalizing a MEF 3.0 Layer 1 service definition specification as soon as 3Q18.

“Expansion of MEF 3.0 standardization work beyond Ethernet to include IP, SD-WAN, and Layer 1 services is critical for enabling the streamlined interconnection and orchestration of a mix of connectivity services across multiple providers,” said Pascal Menezes, CTO, MEF. “Combining this work with the ongoing development of our emerging suite of LSO (Lifecycle Service Orchestration) APIs will pave the way for orchestrated delivery of on-demand, cloud-centric services with unprecedented user- and application-directed control over network resources and service capabilities.”

Here are some highlights:

MEF has just published the Managed Access E-Line Service Implementation Agreement (MEF 62), which defines a new service with a specific set of management and Class of Service (CoS) capabilities designed to accelerate service provisioning and to simplify management of services that traverse multiple operators. The MEF 3.0 Managed Access E-Line (MAEL) service is derived from the MEF 3.0 Access E-Line service specified in MEF 51.

MEF 62 reduces ordering and provisioning complexities when a service provider requires an Operator Virtual Connection (OVC) service from an operator by defining a MAEL service with a simplified set of CoS requirements – e.g., a single CoS name per OVC – coupled with a simplified set of management requirements for SOAM fault management, SOAM performance management, and latching loopback. By leveraging the management capabilities in the MAEL operator’s network, MEF 62 also intends to eliminate the need for a service provider to deploy hardware – e.g., a NID – at the subscriber’s location to monitor services.

AT&T, Bell Canada, Canoga Perkins, Ciena, Cisco, HFR, and Zayo joined Verizon in contributing to MEF 62.

MEF has published the Subscriber IP Service Attributes Technical Specification (MEF 61) as the first in a planned series of MEF 3.0 IP specifications aimed at standardizing orchestration-ready IP services. MEF 61 specifies a standard set of service attributes for describing IP VPNs and Internet access services offered to end-users and will be used as a starting point for defining attributes for operator IP services. It introduces IP UNIs and IP UNI Access Links for describing how a subscriber is connected to a service provider, as well as IP Virtual Connections and IP Virtual Connection End Points for describing an IP-VPN or Internet access service between those UNIs. Specific service attributes and corresponding behavioral requirements are defined for each of these entities. These include support for assured services – e.g., multiple classes of service, with performance objectives for each class agreed using a standardized set of performance metrics in a Service Level Specification. Bandwidth Profiles that can be applied to IP services are also described, allowing the bandwidth assigned to each class of service to be agreed in a standard way.
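To make the relationships among these entities concrete, here is a minimal Python sketch of how the MEF 61 constructs described above fit together. All class and field names are illustrative only; MEF 61 itself defines the normative attribute names and semantics.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative model of the MEF 61 entities described above.
# Names are hypothetical; MEF 61 defines the normative attributes.

@dataclass
class IpUniAccessLink:
    """A physical or logical link connecting subscriber and provider."""
    link_id: str
    bandwidth_mbps: int

@dataclass
class IpUni:
    """The demarcation point where a subscriber attaches to the provider."""
    uni_id: str
    access_links: List[IpUniAccessLink] = field(default_factory=list)

@dataclass
class CosPerformanceObjective:
    """One class of service, with objectives agreed in a Service Level
    Specification using standardized performance metrics."""
    cos_name: str
    one_way_delay_ms: float
    packet_loss_ratio: float

@dataclass
class IpvcEndPoint:
    """Associates an IP Virtual Connection with a particular UNI and
    carries the Bandwidth Profile applied at that point."""
    uni: IpUni
    ingress_bandwidth_mbps_per_cos: Dict[str, int]  # CoS name -> Mbps

@dataclass
class IpVirtualConnection:
    """An IP-VPN or Internet access service between UNIs."""
    ipvc_id: str
    end_points: List[IpvcEndPoint]
    sls: List[CosPerformanceObjective]
```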

“The immediate value of MEF 61 is that it establishes common terminology and provides the ability to standardize service level agreements for IP services with customers,” said David Ball, editor of MEF 61 and Senior Software Architect, Cisco. “In the longer term, MEF 61 provides the basis for specifying operator IP service attributes, and it will enable standardization of LSO APIs for orchestration of IP services across subscribers, service providers, and wholesale operators.”

Albis-Elcon, Ceragon, Ciena, Coriant, Cox, Ericsson, HFR, RAD, TELUS, TIM, Verizon, Zayo, and ZTE joined Cisco in contributing to MEF 61.

MEF 3.0 SD-WAN -- there are two major SD-WAN initiatives underway: the Multi-Vendor SD-WAN Implementation project and the SD-WAN Service Definition project. The first project is focused on addressing the rapidly growing problem of orchestrating services over multiple SD-WAN deployments that are based on different technology vendors' products. MEF member companies – including SD-WAN vendors Riverbed, VeloCloud (now part of VMware), and Nuage Networks from Nokia, as well as software development services provider Amartus – are collaborating to use MEF’s new, standardized LSO Presto Network Resource Provisioning (NRP) API to meet these interoperability challenges.
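For a sense of what orchestrating over such an API involves, here is a minimal Python sketch of an orchestrator requesting connectivity through a REST-style network resource provisioning endpoint. The URL, path, and payload field names are hypothetical stand-ins; the actual LSO Presto NRP interface is defined in MEF's SDK.

```python
import json
import urllib.request

# Hypothetical endpoint; the real Presto NRP API is specified by MEF's SDK.
NRP_ENDPOINT = "https://sdwan-controller.example.com/nrp/v1/connectivity"

def provision_connectivity(src_port: str, dst_port: str, cir_mbps: int) -> dict:
    """Ask the SD-WAN controller to provision a connectivity service
    between two edge ports with a committed information rate."""
    payload = {
        "sourcePort": src_port,                 # illustrative field names
        "destinationPort": dst_port,
        "committedInformationRateMbps": cir_mbps,
    }
    req = urllib.request.Request(
        NRP_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g., provision_connectivity("nyc-edge-01:p1", "lon-edge-02:p3", 100)
```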

In the second project, MEF members are collaborating to develop an SD-WAN service specification that defines the service components, their attributes, and application-centric QoS, security, and business priority policy requirements to create SD-WAN services. This initiative is led by Riverbed and VeloCloud, now part of VMware, with major contributions from Fujitsu.

MEF 3.0 Layer 1 -- MEF is in the final phase of the review and approval process for a new specification that defines the attributes of a subscriber Layer 1 service for Ethernet and Fibre Channel client protocols – used in LAN and SAN extension for data center interconnect – as well as SONET and SDH client protocols for legacy WAN services. Nokia, Bell Canada, Cisco, and HFR have contributed to this project.

Work already is underway on a companion specification defining Operator Layer 1 services between a UNI and an OTN ENNI (access) and between two OTN ENNIs (transit). This will provide the basis for streamlining the interconnection of multi-domain Layer 1 services.

What is included in MEF 3.0

There are four spokes to the MEF 3.0 wheel:

  • Standardized, Orchestrated Services - including Carrier Ethernet, wavelength, IP, SD-WAN, Security-as-a-Service, and other virtualized services that will be orchestrated over programmable networks using LSO APIs. MEF 3.0 CE R1 is the first release within the MEF 3.0 framework, while work on standardizing orchestration-ready wavelength, IP, SD-WAN, and security services is currently progressing within MEF.
  • Open LSO APIs - MEF’s LSO Reference Architecture guides the agile development of standardized LSO APIs that enable orchestration of MEF 3.0 services across multiple providers and over multiple network technology domains (e.g., Packet WAN, Optical Transport, SD-WAN, 5G, etc.). MEF recently announced the first releases of two LSO SDKs (software development kits) that feature inter-provider APIs for address validation, serviceability, and ordering, and an intra-provider API for network resource provisioning. The LSO APIs included in these SDKs are available for experimental use by MEF members and associated MEF programs.
  • Service and Technology Certifications - MEF is increasing the agility of its popular certification programs to accelerate availability and adoption of MEF 3.0 certified services and technologies. Iometrix continues as MEF’s testing partner, but the certification process is now being virtualized and moved into the cloud. A subscription model will be used so that vendors and carriers can certify that their services and technologies comply with the latest MEF 3.0 standards. This should speed up the certification process considerably, from days to minutes.
  • Expanded Community Collaboration - MEF is working with service and technology providers, open source projects, standards associations, and enterprises to realize a shared vision of dynamic services orchestrated across automated networks. MEF has created a new compute, storage, and networking platform – MEFnet – that enables development, testing, integration, and showcasing of prototype MEF 3.0 implementations using open source and commercial products. MEFnet projects will help accelerate the understanding and adoption of MEF 3.0, as well as provide immediate feedback for standards development within MEF.

Advancing MEF 3.0 - Ethernet, IP, SD-WAN, and Layer 1 Services



Dan Pitt provides an overview of recent developments with MEF 3.0, the transformational framework for defining, delivering, and certifying agile, assured, and orchestrated communication services across a global ecosystem of automated networks.

Filmed at NetEvents at the Dolce Hayes Mansion in San Jose, California.

See video: https://youtu.be/D_edtEpuu0s


Tier 1 Service Provider deploys OpenSwitch + Intent-based Automation



Mansour Karam, CEO of Apstra, discusses the recent deployment by a Tier One Service Provider in the U.S. of OpenSwitch (OPX) on Dell Z9100-ON switches. The network is automated by Apstra's AOS, which provides an intent-based distributed operating system and a data center application suite for service agility, increased uptime and dramatically improved infrastructure TCO.

Filmed at NetEvents in San Jose, California.

See video:  https://youtu.be/QBVtss71x4k




Zayo announces dark fibre contract in UK

Zayo announced a major contract to provide dark fiber infrastructure in the UK for a leading global carrier. The solution includes 1,100 miles (1,800 km) of dark fiber to connect several data centers across the UK. Zayo’s UK network extends from Glasgow and Edinburgh in the north to Birmingham and Manchester, and includes dense metro fiber in London. Financial terms were not disclosed.

“We have a strong relationship with this carrier and have worked closely with them to provide a dark fiber solution that delivers high performance and low latency to support a growing volume of data traffic,” said Annette Murphy, managing director of Europe at Zayo. “The carrier is relying on the infrastructure as a key element of its strategic growth and densification efforts.”

Ciena to acquire Packet Design for network analytics and path computation

Ciena agreed to acquire privately-held Packet Design, a provider of network performance management software focused on Layer 3 network optimization, topology and route analytics. Financial terms were not disclosed.

Packet Design's portfolio includes Route Explorer, an IP/MPLS route analytics software that provides management visibility into routing behavior for all IGP and BGP protocols, Layer 2/3 VPNs, traffic engineering tunnels, segment routing and multicast with real-time monitoring, historical reporting, and what-if modeling capabilities.

Ciena said the acquisition will help accelerate its Blue Planet software strategy by extending its intelligent automation capabilities beyond Layers 0-2 and into IP with critical new capabilities to help customers optimize service delivery and maximize network utilization. Specifically, the combination of the Blue Planet software platform and Packet Design’s performance analytics and service path computation capabilities will form a unique, micro-services-based platform that delivers real-time analytics, optimization and orchestration capabilities to support the broadest range of closed-loop automation use cases across multi-layer, multi-vendor networks.

“Blue Planet is already one of the premier brands in the network automation space. The addition of Packet Design will enhance our position by enabling customers to realize networks that are more adaptive – capable of self-optimizing and self-healing for faster time-to-market for new services, more efficient and lower cost network operations, and the ability to deliver an overall better customer experience,” said Rick Hamilton, senior vice president of Global Software and Services at Ciena.


ICN2 subsea cable to link Vanuatu to the Solomon Islands

Construction is expected to begin shortly on the ICN2 submarine cable linking Vanuatu to the Solomon Islands in the south Pacific. The 1,632 km cable will provide initial 200G high-capacity access to several landing sites, utilizing SL14-A1 cables and Ciena Submarine Line Terminating Equipment (SLTE). The project is sponsored by Interchange Limited, a Vanuatu-based consortium. TE SubCom is the general contractor. The ready-for-service date is Q4 2019.

“This submarine cable link is an important part of connectivity for this area of the world,” said Sanjay Chowbey, president of TE SubCom. “We are pleased to work with Interchange Limited and apply our expertise and regional knowledge to this project.”

“Interchange Limited is committed to improved ICT infrastructure to communities we serve throughout the Melanesian region. The ICN2 cable project truly supports our mission,” said Simon Fletcher, CEO of Interchange Limited. “ICN2 is the first CIF submarine cable to the Solomon Islands. With the planned future systems in the region, we feel confident they will be complementary and serve to build a redundant and reliable network. This should provide some confidence to regional investors and data center partners.”

Marvell posts revenue of $605 million, next quarter excludes $7M in sales to ZTE

Marvell Technology Group reported revenue for its first quarter of fiscal 2019 was $605 million, which exceeded the midpoint of the Company's guidance provided on March 8, 2018. GAAP net income from continuing operations for the first quarter of fiscal 2019 was $129 million, or $0.25 per diluted share. Non-GAAP net income from continuing operations for the first quarter of fiscal 2019 was $165 million, or $0.32 per diluted share. Cash flow from operations for the first quarter was $129 million.

"Fiscal 2019 is off to a strong start, driven by the performance of our storage, networking and connectivity businesses which grew 7% year over year in Q1. Marvell's R&D engine is executing well, and our newly announced products are fueling a growing design win pipeline," said Marvell President and CEO Matt Murphy. "Overall, I'm pleased with the results and thank the entire Marvell team for their effort and contribution."

Revenue for the company's second quarter of fiscal 2019 is expected to be $600 million to $630 million. The guidance range excludes approximately $7 million in revenue from a Chinese OEM (ZTE) due to the trade restrictions imposed by the U.S. government.

Ciena posts Q2 revenue of $730M, up 3% yoy

Ciena reported Q2 revenue of $730.0 million, up 3% year over year as compared to $707.0 million for the fiscal second quarter 2017.

GAAP net income for the fiscal second quarter 2018 was $13.9 million, or $0.09 per diluted common share, which compares to a GAAP net income of $38.0 million, or $0.25 per diluted common share, for the fiscal second quarter 2017. Ciena's adjusted (non-GAAP) net income for the fiscal second quarter 2018 was $33.8 million, or $0.23 per diluted common share, which compares to an adjusted (non-GAAP) net income of $48.2 million, or $0.30 per diluted common share, for the fiscal second quarter 2017.

"We delivered strong revenue and record order flow in the second quarter as we continue to broaden our leadership and capture market share. Gross margin was impacted by several new, international service provider deployments in their early stages; however, we are confident in our ability to return to our normalized gross margin levels. We anticipate strong revenue growth in the second half of fiscal 2018 and we remain confident in our three-year financial targets," stated Ciena President and CEO Gary B. Smith.

Some highlights from the company's quarterly financial presentation:

  • Non-telco represented 34% of total revenue
  • Direct webscale was 17% of total revenue
  • North America represented 59.1% of Q2 revenue
  • EMEA was up QoQ and YoY at 16.7% of total revenue
  • APAC contributed over 20% of total revenue; India revenue was over 10% of total revenue at $79 million
  • WaveLogic Ai: 29 total customers
  • Waveserver: 84 customers

AWS announces Pay-per-Session Pricing for Amazon QuickSight

Amazon Web Services (AWS) announced pay-per-session pricing for Amazon QuickSight, its fast, cloud-powered business analytics service.

Pay-per-session pricing for Amazon QuickSight dashboards starts at $0.30 per session up to a maximum of $5 per user, per month, and is available in Amazon QuickSight Enterprise Edition in all supported AWS regions.
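As a worked example of the announced pricing, the sketch below computes a month's reader charges from the two published figures: $0.30 per session, capped at $5 per user per month. The function is purely illustrative, not an AWS API.

```python
from typing import Iterable

def quicksight_reader_cost(sessions_per_user: Iterable[int]) -> float:
    """Monthly reader cost under pay-per-session pricing:
    $0.30 per session, capped at $5.00 per user per month."""
    return sum(min(sessions * 0.30, 5.00) for sessions in sessions_per_user)

# Three users with 3, 10, and 40 sessions in a month:
# 3 * $0.30 = $0.90; 10 * $0.30 = $3.00; 40 sessions hits the $5.00 cap.
print(round(quicksight_reader_cost([3, 10, 40]), 2))  # 8.9
```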

“With highly scalable object storage in Amazon Simple Storage Service (Amazon S3), data warehousing at one-tenth the cost of traditional solutions in Amazon Redshift, and serverless analytics offered by Amazon Athena, customers are moving data into AWS at an unprecedented pace,” said Dorothy Nicholls, Vice President of Amazon QuickSight at Amazon Web Services, Inc. “What's changed is that virtually all knowledge workers want easy access to that data and the insights that can be derived. It's been cost-prohibitive to enable that access for entire companies until the Amazon QuickSight pay-per-session pricing -- this is a game-changer in terms of information and analytics access.”

Sierra Wireless' CEO announces retirement

Sierra Wireless announced that Jason Cohenour will retire from his position as President and Chief Executive Officer and will step down as a director of the company.

Kent Thexton, Chair of Sierra’s Board of Directors, has been named interim CEO. A search is underway for a permanent replacement.

“On behalf of the entire Board, I want to thank Jason for his significant contributions to Sierra Wireless throughout his 22 years with the company, including the last 12 years as CEO,” said Mr. Aasen. “Thanks to Jason’s vision and leadership, Sierra successfully refocused its strategy and transitioned into a global leader in the IoT market. The Board will be taking this opportunity to recruit a world class leader to guide the Company through its next phase of growth and value creation. Kent has extensive experience serving in senior leadership positions in the international wireless and technology industries, and I am confident he is the right person to guide Sierra while we conduct a thorough search for our next CEO.”

Wednesday, May 30, 2018

AWS goes live with Neptune graph database

AWS announced general availability of Amazon Neptune, a fully-managed graph database service.

Amazon Neptune efficiently stores and navigates highly connected data, allowing developers to create sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.

Amazon Neptune is highly available and durable, automatically replicating six copies of data across three Availability Zones and continuously backing up data to Amazon Simple Storage Service (Amazon S3). Amazon Neptune is designed to offer greater than 99.99 percent availability and automatically detects and recovers from most database failures in less than 30 seconds. Amazon Neptune also provides advanced security capabilities, including network security through Amazon Virtual Private Cloud (VPC), and encryption at rest using AWS Key Management Service (KMS).
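Neptune exposes the open Apache TinkerPop Gremlin and W3C SPARQL query interfaces. As a rough illustration of the kind of relationship-hopping query the service is built for, a Gremlin traversal from Python (via the gremlinpython package) might look like the following; the endpoint and graph contents are placeholders, not a real cluster.

```python
# pip install gremlinpython
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint -- substitute your Neptune cluster endpoint.
conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Hop from a vertex to its neighbors: find up to 10 accounts that
# "alice" follows. The vertex labels and properties are illustrative.
followed = (g.V().has("account", "name", "alice")
             .out("follows").values("name").limit(10).toList())
print(followed)

conn.close()
```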

“Amazon Neptune is a key part of the toolkit we use to continually expand Alexa’s knowledge graph for our tens of millions of Alexa customers—it’s just Day 1 and we’re excited to continue our work with the AWS team to deliver even better experiences for our customers,” said David Hardcastle, Director of Amazon Alexa, Amazon.

https://aws.amazon.com/neptune


AT&T expects ruling on Time Warner merger on June 12

AT&T expects a ruling on June 12 in the lawsuit brought against AT&T and Time Warner by the U.S. Department of Justice. If the court rules in its favor, AT&T is ready to close on the merger. The company anticipates annualized cost synergies of $1.5 billion by the end of the third year after close.

Speaking at this week's Cowen Technology, Media and Telecom Conference,  John Stephens, senior vice president and chief financial officer of AT&T, also stated:

  • AT&T expects to expand its video offerings to better address each customer segment and grow its total video subscriber base.
  • This includes AT&T’s top-of-the-line services DIRECTV and U-verse and OTT service DIRECTV NOW. 
  • Following the close of the Time Warner deal, AT&T plans to introduce AT&T Watch, a skinny package without local programming or sports-only channels. 
  • By the end of the year, the company also expects to launch a premium streaming experience that will compete with traditional linear TV products for in-home use. The product will be app-based with a small device that connects to customers’ TVs and home broadband. 
  • Advertising is a significant part of the company’s video strategy; Stephens noted the vast ad inventory AT&T will have across its platforms following the Time Warner acquisition.
  • The FirstNet nationwide public safety broadband network for America’s first responders is off to a strong start. AT&T expects FirstNet capital spending of $2 billion this year.
  • AT&T plans to reach 500 markets with 5G Evolution technology by the end of 2018. With 5G Evolution, the company is seeing speeds up to twice those of standard LTE in many areas.

Facebook plans next data center in Utah

Facebook will build one of its hyperscale data centers in Eagle Mountain, Utah.

The 970,000 square foot Eagle Mountain Data Center will be powered by 100% renewable energy.

Facebook said the Eagle Mountain project represents an investment of more than $750 million.

The data center will use outside air to cool its servers.

LF Networking adds global carriers as members

The LF Networking Fund (LFN), which facilitates collaboration and operational excellence across open networking projects, is gaining traction with global telecom service providers. New members include Sprint, KT, KDDI, SK Telecom, Swisscom, and Telecom Italia.

Additional members include AT&T, Bell Canada, China Mobile, China Telecom, China Unicom, Comcast, Orange, PCCW Global, Reliance Jio, Turk Telecom, Verizon, Vodafone and others.

The LF Networking Fund said telecom service providers are increasingly developing solutions and deploying LFN projects within their networks, with ONAP, OPNFV and ODL as critical components to enable SDN/NFV, 5G, big data, Artificial Intelligence (AI) and Internet of Things (IoT) network services.

"I am delighted to see expanded membership and support from even more of the world's leading telecom service providers," said Arpit Joshipura, general manager of Networking and Orchestration, The Linux Foundation. "As LFN now enables over 65 percent of the global mobile subscribers, we can better see the impact of open source on the networking ecosystem, signaling a broader industry trend toward innovation, harmonization and accelerated deployment."

"As a leading telecom company, we put great importance in network automation that utilizes open source to cope with maintenance and management of both virtualized and existing network complexly," said Yoshiaki Uchida, senior managing executive officer, Director of KDDI. "We are eager to continue to work with the ONAP community to further progress network management for the 5G era."

"5G technology is expected to dynamically provide various high-quality applications and services through virtualization-based open source technology. By joining LFN,  KT will actively participate in the open source ecosystem, which is set to lead standardization and development of next-generation 5G networks," said Dr. Hongbeom Jeon, head of KT Infra Lab. "As a result, we will collectively pioneer the new era of smart and cost-effective 5G platforms."

IDC: Worldwide server market surges 39% yoy in Q1

Vendor revenue in the worldwide server market increased 38.6%, year over year to $18.8 billion during the first quarter of 2018 (1Q18), according to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker. Worldwide server shipments increased 20.7% year over year to 2.7 million units in 1Q18.

IDC said the growth is driven by a market-wide enterprise refresh cycle, strong demand from cloud service providers, increased use of servers as the core building blocks for software-defined infrastructure, broad demand for newer CPUs such as Intel's Purley platform, and growing deployments of next-generation workloads.

"Hyperscale growth continued to drive server volume demand in the first quarter," said Sanjay Medvitz, senior research analyst, Servers and Storage at IDC. "While various OEMs are finding success in this space, ODMs remain the primary beneficiary from the quickly growing hyperscale server demand, now accounting for roughly a quarter of overall server market revenue and shipments."

Some key findings cited by IDC:

  • Revenue in the worldwide server market increased 38.6% year over year to $18.8 billion while shipments grew 20.7% to 2.7 million units during the first quarter of 2018.
  • 1Q18 marks the third consecutive quarter of double-digit growth.
  • Average selling prices (ASPs) increased during the quarter due to richer configurations and increased component costs. The increased ASPs also contributed to revenue growth.
  • Volume server revenue increased by 40.9% to $15.9 billion, while midrange server revenue grew 34% to $1.7 billion. High-end systems grew 20.1% to $1.2 billion.
  • Dell Inc. and HPE/New H3C Group were statistically tied for first in the worldwide server market with 19.1% and 18.6% market shares, respectively, in 1Q18.
  • Dell was the fastest growing server vendor among the top 5 companies, growing revenue 50.6% year over year to $3.6 billion and gaining 1.5 points of revenue share year over year on a strong performance in all major geographic regions. 
  • HPE/New H3C Group revenue increased 22.6% year over year in 1Q18 to $3.5 billion. HPE's share and year-over-year growth rate include revenues from the H3C joint venture in China that began in May of 2016; thus, the reported HPE/New H3C Group combines server revenue for both companies globally. 
  • Lenovo, IBM, and Cisco were all statistically tied for the third position in the market with respective shares of 5.8%, 5.3%, and 5.2%. 
  • The ODM Direct group of vendors grew revenue by 57.1% (year over year) to $4.6 billion. 
  • Dell Inc. led the worldwide server market in terms of unit shipments, accounting for 20.6% of all units shipped during the quarter.


ExteNet Systems to acquire Hudson Fiber Network

ExteNet Systems, a private developer, owner and operator of distributed networks across the United States, agreed to acquire Hudson Fiber Network (HFN). Financial terms were not disclosed.

Hudson Fiber Network (HFN) is a data transport provider which has a significant metro fiber network in the greater New York City area and operates a national wide-area network with key international points of presence.


"We are pleased to announce our intention to acquire Hudson Fiber Network to accelerate growth of ExteNet’s Optical Network Solutions business,” said Ross Manire, President and CEO of ExteNet Systems. “We have served the northeast region, including New York City, for many years with our fiber, small cell and indoor network solutions. We plan to leverage the core competencies of both companies to offer our customers an expanded portfolio of carrier and enterprise solution offerings and rapidly expand into other major markets by leveraging ExteNet’s extensive fiber plant.”

OFS expands fiber portfolio

OFS has expanded its AccuTube+ Rollable Ribbon Cable family to include cables with 432, 576 and 864 fiber counts featuring rollable ribbon technology in a ribbon-in-loose-tube cable design.

This expanded product line of 100% gel-free cables will offer both single jacket/all-dielectric and light armor constructions.

OFS said rollable ribbon fiber optic cables can help users achieve significant time and cost savings using mass fusion splicing while also doubling their fiber density in a given duct size compared to traditional flat ribbon cable designs.

Each OFS rollable ribbon features 12 individual optical fibers that are partially bonded to each other at predetermined points. These ribbons can be "rolled" into a flexible and compact bundle that offers the added benefit of improved fiber routing and handling in closure preparation.

The AccuTube+ Rollable Ribbon Cable product line also features cables with 1728 fibers in both single jacket and light armor designs and 3456 fibers in a single jacket construction. All of these cables meet or exceed the requirements of Telcordia GR-20 issue 4.

http://www.ofsoptics.com

Masergy expands global bandwidth-on-demand to SD-WAN

Masergy announced the extension of its Intelligent Service Control with Global Bandwidth on Demand for Managed SD-WAN.

The Global Bandwidth on Demand feature is built into Masergy’s Intelligent Service Control (ISC) customer portal, enabling customers to instantly ramp up or reduce Managed SD-WAN bandwidth by location. Enterprise IT managers typically use this feature to accommodate data back-up, multi-site video conferences, disaster recovery measures or other business requirements that use atypical bandwidth at high speeds. As with the private network, Masergy Global Bandwidth on Demand for public links can also be calendarized, so users can pre-select times throughout the week to increase bandwidth and ensure uptime for scheduled analytics projects or data backups. The customer is billed incrementally only for the specific spike of bandwidth usage.
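As a rough sketch of that billing logic -- bill only the increment above the baseline, and only for the scheduled window -- consider the following. The data model and figures are hypothetical illustrations, not Masergy's actual portal API or pricing.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduledBoost:
    """A calendarized bandwidth increase for one site (illustrative)."""
    site: str
    baseline_mbps: int
    boosted_mbps: int
    start: datetime
    end: datetime

    def billable_mbps_hours(self) -> float:
        # Only the spike above baseline is billed, for the window only.
        hours = (self.end - self.start).total_seconds() / 3600.0
        return (self.boosted_mbps - self.baseline_mbps) * hours

# e.g., doubling a 500 Mbps site to 1 Gbps for a four-hour backup window
boost = ScheduledBoost("chicago-dc", 500, 1000,
                       datetime(2018, 6, 2, 22, 0), datetime(2018, 6, 3, 2, 0))
print(boost.billable_mbps_hours())  # 2000.0 Mbps-hours of incremental usage
```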

“The one certainty today in enterprise information technology is rapid change,” said Chris MacFarland, CEO, Masergy. “As the complexity of the enterprise application environment increases, IT professionals are turning to software-defined hybrid networks to deliver superior user application experiences. This enhancement gives IT professionals complete control of their global hybrid networks, regardless of the access methodology, by extending our patented service control capabilities to our fully integrated Managed SD-WAN solution.”

“Enterprises are increasingly turning to service providers who deliver the flexibility of hybrid WAN architectures that leverage both public internet and private MPLS links," said Mike Sapien, VP and Chief Analyst at Ovum. “Masergy designs global network solutions based on their customer's users, application needs and each location's risk tolerance. With its latest innovation, the Masergy Global Bandwidth on Demand solution provides the ability to not only increase public network bandwidth dynamically in real time or at predetermined times, but also provides customers reliable business continuity if either private or public networks fail.”

Tuesday, May 29, 2018

AT&T NetBond brings direct connect to Google Cloud Platform

AT&T and Google Cloud announced two areas of collaboration.

First, business customers will be able to use AT&T NetBond for Cloud to connect in a highly secure manner to Google Cloud Platform. Google's Partner Interconnect offers organizations private connectivity to GCP and allows data centers geographically distant from a Google Cloud region or point of presence to connect at up to 10 Gbps. Google has joined more than 20 leading cloud providers in the NetBond® for Cloud ecosystem, which gives access to more than 130 different cloud solutions.

Second, G Suite, which is Google's cloud-based productivity suite for business including Gmail, Docs and Drive, is now available through AT&T Collaborate, a hosted voice and collaboration solution for businesses.

"We're committed to helping businesses transform through our edge-to-edge capabilities. This collaboration with Google Cloud gives businesses access to a full suite of productivity tools and a highly secure, private network connection to the Google Cloud Platform," said Roman Pacewicz, chief product officer, AT&T Business. "Together, Google Cloud and AT&T are helping businesses streamline productivity and connectivity in a simple, efficient way."

"AT&T provides organizations globally with secure, smart solutions, and our work to bring Google Cloud's portfolio of products, services and tools to every layer of its customers' business helps serve this mission," said Paul Ferrand, President Global Customer Operations, Google Cloud. "Our alliance allows businesses to seamlessly communicate and collaborate from virtually anywhere and connect their networks to our highly-scalable and reliable infrastructure."

Semtech announces PAM4 clock and data recovery platform

Semtech announced a PAM4 clock and data recovery (CDR) platform optimized for low power and low-cost PAM4 optical interconnects used in data center and active optical cable (AOC) applications.

Semtech's Tri-Edge is a new CDR platform technology being developed for the PAM4 communication protocol. It builds on the success of Semtech’s ClearEdge NRZ-based CDR platform technology and extends it to PAM4 signaling.

The company says its Tri-Edge CDR platform will be applicable for 100G, 200G and 400G requirements.

“The rapidly growing demand for bandwidth in the data center market requires a disruptive solution to meet the power, density and cost requirements. By combining leading-edge technologies with a focused application, we can enable a disruptive solution that we believe will meet the needs of the data centers in both the near-term and long-term,” said Imran Sherazi, Vice President of Marketing and Applications for Semtech’s Signal Integrity Products Group.

Semtech notes that its ClearEdge CDRs are the world’s most widely selected optical transceiver CDRs for use in 10G applications and 100G data center applications.


Oclaro and Acacia collaborate on 100/200G CFP2-DCO

Acacia Communications and Oclaro are collaborating on a multi-vendor environment of fully interoperable CFP2-DCO modules based on Acacia’s Meru DSP.

Specifically, Oclaro plans to launch a new CFP2-DCO module that will feature plug-and-play compatibility with the Acacia CFP2-DCO, providing customers with two proven coherent optics suppliers for the 100/200G CFP2-DCO form factor. 

CFP2-DCOs integrate the coherent DSP into the pluggable module. The digital host interface enables simpler integration between module and system resulting in faster service activation and a pay-as-you-grow deployment model for telecommunication providers whereby the cost of additional ports can be deferred until additional services are needed.

The CFP2-DCO pluggable form factor, which is being introduced by multiple network equipment manufacturers (NEMs) in switch, router, and transport platforms, supports four times the faceplate density of current-generation 100G CFP-DCO solutions: a CFP2 module occupies roughly half the space of a CFP, and doubling the data rate to 200G doubles the capacity of each port.

The companies said their CFP2-DCO pluggable coherent modules support transmission speeds of 100G and 200G for use in access, metro and data center interconnect markets.  In addition to proprietary operating modes, both companies intend to support the requirements of the Open ROADM MSA for interoperability at 100G.

“Network operators and our system partners have been excited about the ramp of our CFP2-DCO module,” said Benny Mikkelsen, Chief Technology Officer of Acacia Communications.  “By partnering with Oclaro to ensure interoperability with their Meru-based CFP2-DCO module, we believe we will be better positioned to address the DCO market as industry trends shift favorably toward the CFP2 form factor.  We are excited about our relationship with Oclaro and believe that broader adoption of 200G CFP2-DCO modules will be mutually beneficial to our two companies and the customers we serve.”

“Our 43Gbaud Coherent Transmitter Receiver Optical Sub-Assembly (TROSA) is at the heart of our CFP2-DCO. The TROSA leverages proven Indium Phosphide PIC technology from Oclaro’s highly successful CFP2-ACO to achieve industry-leading optical performance in a small form factor,” said Beck Mason, President of the Integrated Photonics Business at Oclaro. “By establishing a fully interoperable solution with Acacia, our customers will have two sources of supply for these critical components, enabling them to efficiently upgrade their networks to higher speeds.”

NYU develops AR learning tool using Verizon's 5G testbed

NYU’s Future Reality Lab is using Verizon’s pre-commercial 5G technology at Alley, a co-working space and site of Verizon’s 5G incubator in New York City, to develop ChalkTalk, an open source AR learning tool that renders multimedia objects in 3D.

The idea is to use AR on mobile devices to create more effective learning tools that are able to update and respond in real time as the instructor makes his or her point.

“We’ve been able to test and experiment with the 5G technology,” said NYU's Dr. Ken Perlin. “We’re looking at simple use cases now, but will be looking at more involved, more interesting applications as time goes on.”

http://www.verizon.com/about/news/chalktalk--using-5g-and-ar-enhance-learning-experience

Samsung hails the rapid pace in 5G standardization

Two years after hosting the Third Generation Partnership Project (3GPP) meeting in Busan, Korea that kicked off the 5G standardization process, Samsung Electronics this month hosted another 3GPP meeting to wrap up the first phase of the effort.

Based on this latest meeting in Busan, the 3GPP is expected to make the final announcement of 5G phase-1 standards at a general meeting to be held in the U.S. in June. The 5G standardization process that started in April 2016 will end next month after a 27-month journey, significantly faster than the LTE standards development process.

In a blog posting, Samsung recounts its contributions to the hectic 5G development process.

“Samsung Electronics has been working on ultra-high frequency three years faster than other companies,” said Younsun Kim, Principal Engineer of Standards Research Team at Samsung Research and Vice Chairman of RAN1 working group in 3GPP. “When the world started to discuss the setting of standards, Samsung had already developed the related technologies. We had strong aspirations to bring the standardization for 5G commercialization faster than any other company in the world.”

Some notes from Samsung:

  1. Within 3GPP, Samsung has been in charge of four positions, including Chair of the Service & System TSG and Chair of the RAN4 working group, which oversees the frequency and performance aspects that are key to 5G; in 2018 it took on one more chair position – the SA6 working group for mission-critical applications.
  2. Samsung has registered 1,254 patents with ETSI as essential to 5G. 

https://news.samsung.com/global/pioneer-in-5g-standards-part-2-a-hectic-27-month-journey-to-achieve-standardization

Supermicro unveils 2 PetaFLOP SuperServer based on New NVIDIA HGX-2

Super Micro Computer is using the new NVIDIA HGX-2 cloud server platform to develop a 2 PetaFLOP "SuperServer" aimed at artificial intelligence (AI) and high-performance computing (HPC) applications.

"To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance," said Charles Liang, president and CEO of Supermicro. "The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power."

The design packs over 80,000 CUDA cores (16 Tesla V100 GPUs x 5,120 CUDA cores each = 81,920).

Mellanox intros Hyper-scalable Enterprise Framework

Mellanox Technologies introduced its Hyper-scalable Enterprise Framework for private cloud and enterprise data centers.

The five key elements of the ‘Mellanox Hyper-scalable Enterprise Framework’ are:
  • High Performance Networks – Mellanox's end-to-end suite of 25G, 50G, and 100G adapters, cables, and switches is proven within hyperscale data centers that have adopted these solutions for the simple reason that an intelligent and high-performance network delivers total infrastructure efficiency
  • Open Networking – an open and fully disaggregated networking platform is key to scalability and flexibility as well as achieving operational efficiency
  • Converged Networks on an Ethernet Storage Fabric – a fully converged network supporting compute, communications, and storage on a single integrated fabric
  • Software Defined Everything and Virtual Network Acceleration – enables enterprises to enjoy the benefits of the hyperscalers, who have embraced software-defined networking, storage, and virtualization – or software-defined everything (SDX)
  • Cloud Software Integration – networking solutions that are fully integrated with the most popular cloud platforms such as OpenStack, vSphere, and Azure Stack, with support for advanced software-defined storage solutions such as Ceph, Gluster, Storage Spaces Direct, and VSAN

“With the advent of open platforms and open networking it is now possible for even modestly sized organizations to build data centers like the hyperscalers do,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are confident and excited to release the Mellanox Hyper-scalable Enterprise Framework to the industry – and to provide an open, intelligent, high performance, accelerated and fully converged network to enable enterprise and private cloud architects to build a world-class data center.”

Samsung hits mass production of 10nm-Class 32GB DDR4

Samsung Electronics Co. started mass producing the industry’s first 32-gigabyte (GB) double data rate 4 (DDR4) memory module.

The small outline dual in-line memory modules (SoDIMMs) are used in gaming laptops.

Samsung said that compared to its 16GB SoDIMM based on 20nm-class 8-gigabit (Gb) DDR4, which was introduced in 2014, the new 32GB module doubles the capacity while being 11 percent faster and approximately 39 percent more energy efficient. A 64GB laptop configured with two 32GB DDR4 modules consumes less than 4.6 watts (W) in active mode and less than 1.4W when idle.

Salesforce is now on a $12 billion per year run rate

Salesforce reported first quarter revenue of $3.01 billion, an increase of 25% year-over-year, and 22% in constant currency. Subscription and support revenues were $2.81 billion, an increase of 27% year-over-year. Professional services and other revenues were $196 million, an increase of 4% year-over-year. First quarter GAAP diluted earnings per share was $0.46, and non-GAAP diluted earnings per share was $0.74. The company also reported unearned revenue (deferred revenue) of $6.20 billion, an increase of 25% year-over-year, and 23% in constant currency.

"Salesforce delivered more than $3 billion in revenue in the first quarter, surpassing a $12 billion annual revenue run rate," said Marc Benioff, chairman and CEO, Salesforce. "Our relentless focus on customer success is yielding incredible results, including delivering nearly two billion AI predictions per day with Einstein."

KKR to acquire BMC for its enterprise software

KKR, a leading global investment firm, agreed to acquire BMC for an undisclosed sum. BMC is currently owned by a private investor group led by Bain Capital Private Equity and Golden Gate Capital together with GIC, Insight Venture Partners and Elliott Management.

Founded in 1980, BMC is a leading systems software provider which helps enterprise organizations manage and optimize information technology across cloud, hybrid, on-premise, and mainframe environments. The company claims more than 10,000 customers worldwide, including 92% of the Forbes® Global 100.

"With the support and partnership of our Investor Group, BMC significantly accelerated its innovation of new technologies and new go-to-market capabilities over the past five years," said Peter Leav, President and Chief Executive Officer of BMC. "Our growth outlook remains strong as BMC is competitively advantaged to continue to invest and win in the marketplace. Our customers can expect the BMC team to remain focused on providing innovative solutions and services with our expanding ecosystem of partners to help them succeed across changing enterprise environments. We are excited to embark on our next chapter with KKR as our partner."

"In an ever-changing IT environment that is only becoming more complex, companies that help simplify and manage this essential infrastructure for their enterprise customers play an increasingly important role," said Herald Chen, KKR Member and Head of the firm's Technology, Media & Telecom (TMT) industry team, and John Park, KKR Member. "With more than 10,000 customers and 6,000 employees, BMC is a global leader in managing digital and IT infrastructure with a broad portfolio of software solutions.  We are thrilled to partner with the talented BMC team to accelerate growth—including via M&A—building on BMC's deep technology expertise and long-standing customer relationships."

Toshiba debuts portable SSDs based on 64-layer 3D Flash

Toshiba Memory America introduced its XS700 Series of portable solid state drives (SSDs) offering capacity of up to 240GB.

The new drives use Toshiba's in-house 64-layer 3D flash memory, BiCS FLASH. The XS700 supports USB 3.1 Gen 2 and features the latest USB Type-C™ connector.


Monday, May 28, 2018

Start-up profile: Rancher Labs, building container orchestration on Kubernetes

Rancher Labs is a start-up based in Cupertino, California that offers a container management platform that has racked up over four million downloads. The company recently released a major update for its container management system. Recently, I sat down with company co-founders Sheng Liang (CEO) and Shannon Williams (VP of Sales) to talk about Kubernetes, the open source container orchestration system that was originally developed by Google. Kubernetes was initially released in 2014, about the time that Rancher Labs was getting underway.

Jim Carroll, OND: So where does Kubernetes stand today?

Sheng Liang, Rancher Labs: Kubernetes has come a long way. When we started three years ago, Kubernetes was also just getting started. It had a lot of promise, but people were talking about orchestration wars and stuff. Kubernetes had not yet won but, more importantly, it wasn't really useful. In the early days, we couldn't even bring ourselves to say that we were going to focus exclusively on Kubernetes. It was not that we did not believe in Kubernetes, but it just didn't work for a lot of users. Kubernetes was almost seen as an end unto itself. Even standing up Kubernetes was such a challenge back then that just getting it to run became an end goal. A lot of people in those days were experimenting with it, and the goal was simply to prove - hey - you've got a Kubernetes cluster. Success was to get a few simple apps running. And it's come a long way in 3 years.


A lot of things have changed. First, Kubernetes is now really established as the de facto container orchestration platform. We used to support Mesosphere, we used to support Swarm, and we used to build our own container orchestration platform, which we called Cattle. We stopped doing all of that to focus entirely on Kubernetes. Luckily, the way we developed Cattle was closely modeled on Kubernetes, sort of an easy-to-use version of Kubernetes. So we were able to bring a lot of our experience to run on top of Kubernetes. And now it turns out that we don't have to support all of those other frameworks. Kubernetes has settled that. It is now a common tool that everyone can use.

JC: The Big Three cloud companies are now fully behind Kubernetes, right?

Sheng Liang: Right. I think that for the longest time a lot of vendors were looking for opportunities to install and run Kubernetes. That kept us alive for a while. Some of the early Kubernetes deals that we closed were about installing Kubernetes. These projects then turned into operations contracts because people thought they were going to need help with upgrading or just maintaining the health of the cluster. This got blown out of the water last year when all of the big cloud providers started to offer Kubernetes as a service.

If you are on the cloud already, there is really no reason to stand up your own Kubernetes cluster.

Well, we're really not quite there yet. Even though Amazon announced EKS in November, it is not even GA yet; it is still in closed beta. But later this year Kubernetes as a service should become a commercial reality. And there are other benefits too.

I'm not sure about Amazon, but both Google and Microsoft have decided not to charge for the management plane, so you don't really pay for the resources used to run the database and the control plane nodes. I guess they must have a very efficient way of running it on some shared infrastructure. That's what I suspect. This allows them to amortize that cost into what they charge for the worker nodes.

The way people set up Kubernetes clusters in the early days was actually very wasteful. You would use three nodes for etcd, two nodes for the control plane, and then when setting it up people would throw in two more nodes for workers. So they were using five nodes to manage two nodes, while paying for seven.

With cloud services, you don't have to do that. I think this makes Kubernetes table stakes. It is not just limited to the cloud.  I think it's really wherever you can get infrastructure. Enterprise customers, for instance, are still getting infrastructure from VMware. Or they get it from Nutanix.

All of the cloud companies have announced, or will shortly announce, support for Kubernetes out of the box. Kubernetes then will equate to infrastructure, just like virtual machines or virtual SANs.

JC: So, how is Kubernetes actually being used now? Is it a one-way bridge or a two-way bridge for moving workloads? Are people actually moving workloads on a consistent basis, or is it basically a one-time move to a new server or cloud?

Shannon Williams, Rancher Labs: Portability is actually less important than other features. It may be the sexy part of Kubernetes to say that you can move clusters of containers. The reality is that Kubernetes is just a really good way to run containers reliably.

The vast majority of people who are running containers are not using Kubernetes for the purpose of moving containers between clouds. The vast majority of people running Kubernetes are doing so because it is more reliable than running containers directly on VMs. It is easier to use Kubernetes from an operational perspective. It is easier from a development perspective. It is easier from a testing perspective. So if you think of the value prop that Kubernetes represents, it comes down to faster development cycles and better operations. The portability is kind of the cherry on top of the sundae.

It is interesting that people are excited about the portability enabled by Kubernetes, and I think it will become really important over the long term, but it is just as important that I can run it on my laptop as that I can run it on one Kubernetes cluster versus another.

Sheng Liang: I think that is a very important point. The vast majority of the accounts we are familiar with run Kubernetes in just one place. That really tells you something about the power of Kubernetes. The fact that people are using this in just one place really tells you that portability is not the primary motivator. The primary benefit is that Kubernetes is really a rock-solid way to run containers.

JC: What is the reason that Kubernetes is not being used so much for portability today? Is the use case weak for container transport? I would guess that a lot of companies would want to move jobs up to the cloud and back again.

Sheng Liang:  I just don't think that portability is the No.1 requirement for companies using containers today. Procurement teams are excited about this capability but operations people just don't need it right now.

Shannon Williams: From the procurement side, knowing that your containers could be moved to another cloud gives you the assurance that you won't be locked in.

But portability in itself is a complex problem. Even Kubernetes does not solve all the issues of porting an application from one system to another. For instance, I may be running Kubernetes on AWS, but I may also be running Amazon Relational Database Service (RDS). Kubernetes is not going to magically support both of these in migrating to another cloud. There is going to be work required. I think we are still a ways away from ubiquitous computing, but we are heading into a world where Kubernetes is how you run containers, and containers are going to be the way that all microservices and next-gen applications are built. It may even be how I run my legacy applications. So, having Kubernetes everywhere means that engineers can quickly understand all of these different infrastructure platforms without having to go through a heavy learning curve. With Kubernetes they will have already learned how to run containers reliably wherever it happens to be running.

JC: So how are people using Kubernetes? Where are the big use cases?

Shannon Williams: I think with Kubernetes we are seeing the same adoption pattern as with Amazon. The initial consumers of Kubernetes were people who were building early containerized applications, predominantly microservices, cloud-native Web apps, mobile apps, gaming, etc. One of the first good use cases was Pokemon Go. It needed massively-scalable systems and ran on Google Cloud. It needed to have systems that could handle rapid upgrades and changes. The adoption of Kubernetes moved from there to more traditional Web applications, and then to more traditional enterprise applications.

Every business is trying to adopt an innovative stance with their IT department.  We have a bunch of insurance companies as customers. We have media companies as customers. We have many government agencies as customers, such as the USDA -- they run containers to be able to deliver websites. They have lots of constituencies that they need to build durable web services for.  These have to run consistently. Kubernetes and containers give them a lot of HA (high availability).

A year or so ago we were in Phase 0 with this movement. Now I would say we are entering Phase 1 with many new use cases. Any organization that is forward-looking in their IT strategy is probably adopting containers and Kubernetes. This is the best architecture for building applications.

JC: Is there a physical limit to how far you can scale with Kubernetes?

Shannon Williams:  It is pretty darn big. You're talking about spanning maybe 5,000 servers.

Sheng Liang: I don't think there is a theoretical limit to how big you can go, but in practice, there is a database that eventually will become a bottleneck. That might be the limiting factor.

I think some deployments have hit 5,000 nodes, and each node these days could actually be a one-terabyte machine. So that is actually a lot of resources. I think it could be made bigger, but so far that seems to be enough.

Shannon Williams: The pressure to hit that maximum size of 5,000 nodes or more in a cluster really is not applicable to the vast majority of the market.

Sheng Liang: And you could always manage multiple clusters with load balancing. It is probably not a good practice anyway to put everything in one superbig cluster.

Generally, we are not seeing people create huge clusters across multiple data centers or multiple regions.

Shannon Williams: In fact, I would say that we are seeing the trend move in the opposite direction, which is that the number of clusters in an organization is increasing faster than the size of any one cluster. What we see is that any application that is running probably has at least two clusters available -- one for testing and one for production. There are often many divisions inside a company that push this requirement forward. For instance, a large media company has more than 150 Kubernetes clusters -- all deployed by different employees in different regions and often running different versions of their software. They even have multiple cloud providers. I think we are heading in that direction, rather than one massive Kubernetes cluster to rule them all.

Sheng Liang: This is not what some of the web companies initially envisioned for Kubernetes. When Google originally developed Kubernetes, they were used to the model where you have a very big pool of resources with bare metal servers. Their challenge was how to schedule all the workloads inside of that pool. When enterprises started adopting Kubernetes, one thing that immediately changed was that they really don't have the operational maturity to put all their eggs in one basket and make that really resilient. Second, all of them were using some form of virtualization – either VMware or a cloud – so essentially the cost of creating small clusters came down. There is not a lot of overhead. You can have a lot of clusters without having to dedicate whole servers to these clusters.

JC: Is there an opportunity then for the infrastructure provider, or the cloud provider, to add their own special sauce on top of Kubernetes?

Sheng Liang: The cloud guys are all starting to do that, and over time I think they will do more. Today it is still early. Amazon, for instance, has not yet commercially launched its service to the public, and DigitalOcean just announced its offering. But Google has been offering Kubernetes as a service for three years, and Microsoft has been doing it for probably over a year. Google's Kubernetes service, which is probably the most advanced, now includes more management dashboards and UIs, but nothing really fancy yet.

What I would expect them to do -- and this would be really great from my perspective -- is to bring their entire service suite, including their databases, AI and ML capabilities, and make them available inside of Kubernetes.

Shannon Williams: Yeah, they will want to integrate their entire cloud ecosystems. That's one of the appealing things about cloud providers offering Kubernetes -- there will be some level of standardization but they will have the opportunity to differentiate for local requirements and flavors.

That kind of leads to the challenge we are addressing.

There are three big things that most organizations face:

(1) You want to be able to run Kubernetes on-prem. Some teams may run it on VMware; some may wish to run it on bare metal. They would like to be able to run it on-prem in a way that is reliable, consistent, and supported. For IT groups, there is a growing requirement to offer Kubernetes as a service in the same way they offer VMs, and to do so they must standardize Kubernetes.

(2) There is the desire to manage all of these clusters in a way that complies with your organization's policies. There will be questions like "how do I manage multiple clusters in a centralized way even if some are on-prem and some are in the cloud?" This is a distro-level problem for Kubernetes.

(3) Then there is the compliance and security concern of how to configure Kubernetes to enforce all of my access control policies, security policies, monitoring policies, etc. (see the sketch below).

Those are the challenges we are taking on with Rancher 2.0.
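
As an illustration of challenge (3): enforcing one access-control policy usually means pushing the same RBAC objects to every cluster. A minimal sketch, assuming the Python kubernetes client; the role and namespace names are hypothetical:

```python
# Minimal sketch: enforce one access-control policy via Kubernetes RBAC.
# The role name and namespace are hypothetical illustrations.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

read_only = client.V1Role(
    metadata=client.V1ObjectMeta(name="app-read-only", namespace="default"),
    rules=[client.V1PolicyRule(
        api_groups=[""],                      # "" = the core API group
        resources=["pods", "services"],
        verbs=["get", "list", "watch"])])     # read-only access

rbac.create_namespaced_role(namespace="default", body=read_only)
```

Repeating a call like this across every kubeconfig context is, at the API level, what centralized, policy-compliant management of many clusters amounts to.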

Jim Carroll, OND: Where does Rancher Labs fit in?

Shannon Williams, Rancher Labs: The challenge we are taking on is how to manage multiple Kubernetes clusters, including how to manage users and policies across multiple clusters in an organization.

Kubernetes is now available as a supported, enterprise-grade service for anybody in your company. At this scale, Kubernetes really becomes appealing to organizations as a standardization approach -- not just so that workloads can easily move between places, but so that workloads can be deployed to lots of places. For instance, I might want some workloads to run on Alibaba Cloud for a project we are doing in China, or I might want to run some workloads on T-Systems' cloud for a project in Germany, where I have to comply with the new data privacy laws. I can now do those things with Kubernetes without having to understand the particular parameters, benefits, or limitations of any specific cloud. Kubernetes normalizes this experience, and Rancher Labs makes it happen in a consistent way. That is a large part of what we are working on at Rancher Labs -- consistent distribution and consistent management of any cluster. We will manage the lifecycle of Amazon's Kubernetes or Google's Kubernetes, our Kubernetes, or a new Kubernetes coming out of a dev lab.

JC: So the goal is to have the Rancher Labs experience running both on-prem and in the public cloud?

Shannon Williams, Rancher Labs: Exactly. So think about it like this. We have a distro of Kubernetes, and we can use it to implement Kubernetes for you on bare metal, on VMware, or in the cloud if you prefer, so you can build exactly the version of Kubernetes that suits you. That is the first piece of value -- we'll give you Kubernetes wherever you need it. The second piece is that we will manage all of your Kubernetes clusters for you, including clusters you requested from Amazon or Google. You have the option of consuming from the cloud as you wish or staying on-prem. There is one other piece that we are working on: it is one thing to provide this normalized service; the additional layer is about engaging users.

What you are seeing with Kubernetes is similar to the cloud. Early adopters move in quickly and have no hesitancy in consuming it, but they represent maybe 1% or 2% of the users. The challenge for the IT department is to make this the preferred way to deliver resources. At this point, you want to encourage adoption, and that means developing a positive experience.

JC: Is your goal to have all app developers aware of the Kubernetes layer? Or is Kubernetes management really the responsibility of the IT managers who thus far are also responsible for running the network, running the storage, running the firewalls...?

Shannon Williams, Rancher Labs: Great question, because Kubernetes is actually part of the infrastructure, but it is also part of the application resiliency layer. It deals with how an application handles a physical infrastructure failure, for example. Do I spin up another container? Do I wait to let a user decide what to do? How do I connect these parts of an application and how do I manage the secrets that are deployed around it? How do I perform system monitoring and alerting of application status? Kubernetes is blurring the line.
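
The resiliency decisions Williams describes are typically declared rather than hand-coded. A minimal sketch, assuming the Python kubernetes client, of a Deployment that keeps three replicas running and restarts any container whose health check fails (the image name and health endpoint are hypothetical):

```python
# Minimal sketch: declare self-healing behavior. Kubernetes restarts
# containers that fail their health check and replaces lost replicas.
# The image and /healthz endpoint are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",            # hypothetical image
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        period_seconds=10))                 # probe every 10 seconds

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                         # keep three copies running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]))))

apps.create_namespaced_deployment(namespace="default", body=deployment)
```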

Sheng Liang, Rancher Labs: It is not really something the coders will be interested in. The interest in Kubernetes starts with DevOps and stops just before you get to storage and networking infrastructure management.

Shannon Williams, Rancher Labs: Kubernetes is becoming of interest to system architects -- the people who are designing how an application is going to be delivered. They are very aware that the app is going to be containerized and running in the cloud. The cloud-native architecture is pulling in developers. So I think it is a little more blurred than whether or not coders get to this level.

Sheng Liang, Rancher Labs: For instance, the Netflix guys used to talk a lot about how they developed applications. Most developers don't spend a lot of time worrying about how their applications are running; they have to spend most of their time worrying about the outcome. But they are highly aware of the architecture. Kubernetes is well regarded as the best way to develop such applications. Scalable, resilient, secure -- those are the qualities driving the acceptance of Kubernetes.

Shannon Williams, Rancher Labs: I would add one more to the list -- quick to improve. There is a continuous pace of improvement with Kubernetes. I saw a great quote about containerization from a CIO, who said, "I don't care about Docker or any other containers or Kubernetes. All I care about is continuous delivery. I care that we can improve our application continuously, and it so happens that containers give us the best way to do that." The point is to get more applications to your users through a safe, secure, and scalable process.

The Cloud Native Computing Foundation (CNCF) aims to build next-generation systems that are more reliable, more secure, and more scalable, and Kubernetes is a big part of this effort. That's why I've said the value of workload portability is often exaggerated.

Jim Carroll, OND:  Tell me about the Rancher Labs value proposition.

Shannon Williams, Rancher Labs: Our value proposition is centered on the idea that Kubernetes will become the common platform for cloud-native architecture. It is going to be really important for organizations to deliver that as a service reliably. It is going to be really important for them to understand how to secure it and how to enforce company policies. Mostly, it will enable people to run their applications in a standardized way. That's our focus.

As an open source software company that means we build the tooling that thousands of companies are going to use to adopt Kubernetes. Rancher has 10,000 organizations using our platform today with our version 1.0 product. I expect our version 2.0 product to be even more popular because it is built around this exploding market for Kubernetes.

JC:  What is the customer profile? When does it make sense to go from Kubernetes to Kubernetes plus Rancher?

Shannon Williams, Rancher Labs: Anywhere Kubernetes and containers are being adopted, really. Our customers talk about the D-K-R stack: Docker-Kubernetes-Rancher.

JC: Is there a particular threshold or requirement that drives the need for Rancher?

Shannon Williams, Rancher Labs: Rancher is often something that users discover early in their exploration of Docker or Kubernetes. Once they have a cluster deployed, they start to wonder how they are going to manage it on an ongoing basis. This often occurs right at the beginning of a container deployment program -- day 1, day 2, or day 3.

Like any other open source software company, we let users download our software for free. The point when a Rancher user becomes a Rancher customer usually comes when the deployment has moved to a mission-critical level. When their business actually runs on the Kubernetes cluster, that's when we are asked to step in and provide support. We end up establishing a business relationship to support them with everything we build.

JC: And how does the business model work in a world of open source container management?

Shannon Williams, Rancher Labs: Customers purchase support subscriptions on an annual basis.

JC: Are you charging based on the number of clusters or nodes? 

Shannon Williams, Rancher Labs: Yes, based on the number of clusters and hosts. A team that is running its critical business systems on Kubernetes gets a lot of benefit from knowing that we provide unified support for everything from the lowest level up, including the container runtime, the Kubernetes engine, the management platform, logging, and monitoring.

JC: Does support mean that you actually run the clusters on behalf of the clients? 

Shannon Williams, Rancher Labs: Well, no, they're running it on their systems or in the cloud. Like other open source software developers, we can provide incident response for issues like "why is this running differently in Amazon than on-prem?" We also provide training for their teams and collaboration on the technology evolution.

JC: What about the company itself? What are the big milestones for Rancher Labs?

Shannon Williams, Rancher Labs: We're growing really fast and now have about 85 employees. We have offices around the world, including in Australia, Japan, and the UK, and we are expanding. We have about 170 customer accounts worldwide, more than 10,000 organizations using the product, and over 4 million downloads to date. The big goals are rolling out version 2.0, which is now in commercial release, and driving adoption of Kubernetes across the board. We're hoping to get lots of feedback as version 2.0 gets rolled out. So much of the opportunity now concerns the workload management layer. How do we make it easier for customers to deploy containerized applications? How can we smooth the rollout of containerized databases in a Kubernetes world? How do we solve the storage portability challenge? There are enormous opportunities to innovate in these areas. It is really exciting.

JC: What is needed to scale your company to the next level?

Shannon Williams, Rancher Labs: Right now we are in a good spot. We benefit from the magic of open source. We were able to grow this fast just on our Series B funding round because thousands of people downloaded our software and loved it. This has given us inroads with companies that are often the biggest in their industries. Lots of the Fortune 500 are now using Rancher to run critical business functions for their teams. We get to work with the most innovative parts of most organizations.

Sheng Liang, Rancher Labs: There is a lot of excitement. We just have to make sure that we keep our quality high and that we make our customers successful. I feel the market is still in its early days. There is a lot more work to make Kubernetes really the next big thing.

Shannon Williams, Rancher Labs: We're still a tiny minority inside of IT. It will be a ten-year journey but the pieces are coming together.