Thursday, June 18, 2015

#ONS2015 - A Look Inside Google's Data Center Network

Networking is at an inflection point in driving next-generation computing architecture, said Amin Vahdat, Senior Fellow and Technical Lead for Networking at Google, in a keynote address at the Open Networking Summit in Santa Clara, California. Increasingly, he said, the ability to build great computers will be determined by the network.

In constructing its fifth-generation "Jupiter" data center networks, Google has essentially built the bandwidth equivalent of the Internet under one roof.

Some key takeaways from the presentation:
  • Google will open source its gRPC load-balancing and application flow-control code
  • Google's B4 software-defined WAN links its global data centers and is bigger than its public-facing network
  • Andromeda Network Virtualization continues to advance as a means to slice the physical network into isolated, high-performance components
  • Google is deploying its "Jupiter" fifth-generation data center architecture.  Traditional designs and data center switches simply cannot keep up and require individual management, so Google decided to build its own gear.
  • Three principles in Google's data center network are: Clos Topologies, Merchant Silicon, and Centralized Control. Everything is designed for scale-out.
  • Load balancing is essential to ensure that resources are available and to manage cost.
  • Looking forward, a data center network may have 50,000 servers, each with 64 CPU cores, access to petabytes of fast flash storage, and a 100G NIC. This implies the need for roughly 5 Pb/s of core switching capacity, more than the entire Internet carries today (see the quick calculation after this list).
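
As a sanity check on that last figure, here is a quick back-of-the-envelope calculation in Python. It is not from the talk; only the 50,000-server and 100G-per-NIC inputs come from the keynote.

    # Back-of-the-envelope check of the ~5 Pb/s figure cited in the keynote.
    servers = 50_000        # servers in a future data center (from the talk)
    nic_gbps = 100          # 100G NIC per server (from the talk)

    total_gbps = servers * nic_gbps          # aggregate server-facing bandwidth
    total_pbps = total_gbps / 1_000_000      # 1 Pb/s = 1,000,000 Gb/s

    print(f"Aggregate bandwidth: {total_pbps:.0f} Pb/s")   # prints 5 Pb/s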

The #ONS2015 keynote can be seen here:
https://youtu.be/FaAZAII2x0w



#ONS2015 - Microsoft Azure Puts SDN at Center of its Hyperscale Cloud

To handle its hyperscale growth, Microsoft Azure must integrate the latest compute and storage technologies into a truly software-defined infrastructure, said Mark Russinovich, Chief Technology Officer of Microsoft Azure, in a keynote presentation at the Open Networking Summit in Santa Clara, California.

The talk covered how Microsoft is building its hyperscale SDN, including its own scalable controllers and hardware-accelerated hosts.


Microsoft is making a massive bet on Azure. It is the company's own infrastructure as well as the basis for many of its products going forward, including Office 365, Xbox and Skype.

Some highlights:
  • Microsoft Azure's customer-facing offerings include App Services, Data Services and Infrastructure Services
  • Over 500 new features were added to Azure in the past year, including better VMs, virtual networks and storage.
  • Microsoft is opening new data centers all over the world
  • Azure is running millions of compute instances
  • There are now more than 20 ExpressRoute locations for direct connect to Azure.  
  • Azure connects with 1,600 peered networks through 85 IXPs
  • One out of 5 VMs running on Azure is a Linux VM
  • A key principle of Microsoft's Hyperscale SDN is to push as much of the logic processing as possible down to the servers (hosts)
  • Hyperscale controllers must be able to handle 500K+ servers (hosts) in a region
  • The controller must be able to scale down to smaller data centers as well
  • Microsoft Azure Service Fabric is a platform for micro-service-based applications
  • Microsoft has released a developer SDK for its Service Fabric
  • Azure uses a Virtual Filtering Platform (VFP) acting as a programmable virtual switch inside the Hyper-V VMSwitch. It provides the core SDN functionality for Azure networking services, using programmable rule/flow tables to perform per-packet actions (see the conceptual sketch after this list). VFP will also be extended to Windows Server 2016 for private clouds.
  • Azure will implement RDMA for very high performance memory transport between servers. It will be enabled at 40GbE for Azure Storage.  All the logic is in the server.
  • Server interface speeds are increasing: 10G to 40G to 50G and eventually to 100G
  • Microsoft is deploying FPGA-based Azure SmartNICs in its servers to offload SDN functions from the CPU. The SmartNICs can also perform crypto, QoS and storage acceleration.
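
To make the rule/flow-table idea concrete, here is a small conceptual sketch in Python. It is not Microsoft's VFP code; the packet fields, tables and actions are hypothetical, illustrating how ordered match-action tables (for example an ACL stage followed by a NAT stage) can apply per-packet actions.

    # Conceptual sketch of programmable match-action tables (illustrative only).
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int

    @dataclass
    class Rule:
        match: Callable[[Packet], bool]               # predicate over packet fields
        action: Callable[[Packet], Optional[Packet]]  # rewritten packet, or None to drop

    def process(packet: Packet, tables: List[List[Rule]]) -> Optional[Packet]:
        """Run a packet through an ordered set of rule tables (e.g. ACL, then NAT)."""
        for table in tables:
            for rule in table:
                if rule.match(packet):
                    packet = rule.action(packet)
                    if packet is None:          # dropped by this table
                        return None
                    break                       # first match wins within a table
        return packet

    # Example: an ACL table that drops telnet, then a NAT table that rewrites one destination.
    acl = [Rule(match=lambda p: p.dst_port == 23, action=lambda p: None)]
    nat = [Rule(match=lambda p: p.dst_ip == "10.0.0.5",
                action=lambda p: Packet(p.src_ip, "192.0.2.10", p.dst_port))]

    print(process(Packet("10.0.0.1", "10.0.0.5", 443), [acl, nat]))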

The #ONS2015 keynote can be seen here:
https://youtu.be/RffHFIhg5Sc



#ONS2015: AT&T Envisions its Future as a Software Company

Over the next few years, AT&T plans to virtualize and control more than 75% of its network functions via its new Domain 2.0 infrastructure. The first 5% will be complete by the end of this year, laying the foundation for an accelerated rollout in 2016.

In a keynote at the Open Networking Summit 2015 in Santa Clara entitled "AT&T's Case for a Software-Centric Network", John Donovan provided an update on the company's Domain 2.0 campaign, saying this strategic undertaking is really about changing all aspects of how AT&T does business.

Donovan, who is responsible for almost all aspects of AT&T's IT and network infrastructure, said AT&T is deeply committed to open source software, including contributing back to open source communities. The goal is to "software-accelerate" AT&T's network.  In the process, AT&T itself becomes a software company.

Here are some key takeaways from the presentation:


  • Since 2007, AT&T has seen a 100,000% increase in mobile data traffic
  • Video represents the majority of traffic on the mobile network
  • Ethernet ports have grown 1,300% since 2010
  • AT&T's network vision is rooted in SDN and NFV
  • The first phase is about Virtualizing Functions.
  • AT&T's Network On-Demand service is its first SDN application to reach customers. It went from concept to trials in six months.
  • The second phase is about Disaggregation.
  • The initial target of disaggregation is the GPON Optical Line Terminal (OLT), which is deployed in central offices to support AT&T's GigaPower residential broadband service. AT&T will virtualize the functions of this physical equipment and run them on less expensive hardware. The company will release an open specification for these boxes.
  • AT&T will contribute its custom YANG design tool to the open source community.
  • AT&T is leading a Central Office Re-architected as Data Center (CORD) project.


http://www.att.com
http://opennetsummit.org/

The ONS2015 keynote can be seen here:
https://youtu.be/7gEvIHCps1Q


Blueprint: Open Standards Do Not Have to Be Open Source

by Frank Yue, Senior Technical Marketing Manager, F5 Networks

Network Functions Virtualization (NFV) is driving much of the innovation and work within the service provider community. The prospect of bringing the benefits of cloud-like technologies to their networks is driving service providers to radically alter how they architect and manage the services they deliver.

Different components from different vendors based on different technologies are required to create an NFV architecture. There are COTS servers, hypervisor management technologies, SDN and traditional networking solutions, management and orchestration products, and many distinct virtual network functions (VNFs). All of these components need to communicate with each other in a defined and consistent manner for the NFV ecosystem to succeed.


[Figure] Source: ETSI, Network Functions Virtualization (NFV); Architectural Framework

While ETSI has defined the labels for the interfaces between the various components of the NFV architecture, there are currently no agreed-upon standards. And although there are several open source projects to develop standards for these NFV interfaces, most have not matured to the point where they are ready for use in a carrier-grade network.

Are Pre-standards Solutions Premature?

In the meantime, various multi-vendor alliances are developing their own pre-standards solutions. Some are proprietary and others are derivations of the work done by open source groups. Almost all of today's proof-of-concept (POC) trials use these pre-standard variations. Each alliance is working with service providers to develop interface models and specifications that everyone within each POC is comfortable with.

It is possible, and even likely, that some of these pre-standards will become de facto standards based on their popularity and utility. There is nothing wrong with standards developed by the vendor or service provider community, as long as they meet two criteria: 1) the standard must work in a multi-vendor environment, since the NFV architecture model depends on multiple vendors delivering different components of the solution; and 2) the standard must be published and open, so that a new vendor can easily build its component to be compatible with the architecture.

Looking at the first of these points, the nature of the NFV architecture is to be an interactive and interdependent ecosystem of components and functions. It is unlikely that all of the pieces of the NFV ecosystem will be produced and delivered by a single vendor. In a mature NFV environment, many vendors will be involved. One multi-vendor NFV alliance currently has over 50 members. Another alliance has designed an NFV POC requiring the involvement of nine distinct vendors.

This multi-vendor landscape drives the need for the second point, for the standard to be published and open. No matter what interface model is developed by each vendor and alliance, it still needs to be published in an open form, allowing other vendors to create models to integrate their solutions into the NFV architecture. It is likely that in the mature NFV ecosystem, some components will be delivered by vendors that are not part of the majority alliance that delivered the NFV solution.

No two service provider networks are alike, and there are close to an infinite number of combinations of manufacturers and technologies that can be incorporated into each service provider’s NFV model.  Service providers will require all of the components in the network to interact in a relatively seamless fashion. This can only be accomplished if the interface pre-standards are open and available to the technology community at large.

Proprietary, but Open?

A proprietary, but open standard is one that has been developed without community consensus. While the standard has been developed by a vendor or alliance of vendors, the model is published to allow anybody interested in developing solutions to incorporate the standard without the need for licensing, partnership, or agreement in general.

Proprietary, but open standards can be developed by a single entity or a small community working together towards a common goal. This gives these proprietary standards some advantages. 1) They can be created quickly since universal consortium acceptance may not be required. 2) They can be adapted and adjusted quickly to meet the changing and evolving nature of NFV architectures.

While open source projects and products have the benefit of being available to everyone, there are tradeoffs to designing technologies by open committee. Open source projects are always in flux as multiple perspectives and methodologies compete for universal consensus. This is especially true when working with standards development organizations (SDOs). Because of this, standards often take years, rather than months, to develop.

In the meantime, the current NFV alliances can develop interface models that are successful in the limited environment of the alliance ecosystem. This rapid development also allows for the tuning of these interfaces as NFV architectures develop and mature. These proprietary, but open, models can be used as a template within the SDOs to develop a standard that has the benefit of being tested and proven in real-world scenarios.

No Model is Perfect

Ultimately, the standards that are developed will probably be a mixture of open source solutions with customized enhancements and open proprietary standards developed by these alliances. It is likely that individual vendors and alliances will enhance the final standards, adding their unique value to improve functionality and differentiate their solution.

In an ideal world, standards are fixed in nature and in time, but networks are evolving and technologies like NFV continue to evolve and mature. In this world of dynamic architectures, it is essential to have standards that are dynamic and proprietary, but open. This type of standard offers a solution that can deliver functions today and adapt to the models of tomorrow.

About the Author

Frank Yue is the Senior Technical Marketing Manager for the Service Provider business at F5 Networks. In this role, Yue is responsible for evangelizing F5’s technologies and products before they come to market. Prior to joining F5, Yue was a sales engineer at BreakingPoint Systems, selling application-aware traffic and security simulation solutions for the service provider market. Yue also worked at Cloudshield Technologies supporting customized DPI solutions, and at Foundry Networks as a global overlay for the ServerIron application delivery controller and traffic management product line. Yue has a degree in Biology from the University of Pennsylvania.

About F5

F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud, data center, telecommunications, and software defined networking (SDN) deployments to successfully deliver applications and services to anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework and a rich partner ecosystem of leading technology and orchestration vendors. This approach lets customers pursue the infrastructure model that best fits their needs over time. The world’s largest businesses, service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and mobility trends. For more information, go to f5.com.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Cisco's David Ward on Open Source Development

Open networking can only evolve with the support of a community of developers, says David Ward, CTO of Engineering and Chief Architect of Cisco. But you can't just launch a developer community; you have to build it, and open source communities have now emerged to do just that.

Cisco is working on many open networking fronts, including OpenDaylight, OPNFV, ONOS and OpenStack. In this video, Ward also highlights NETCONF and YANG, two standards seen as keys to infrastructure programmability (a minimal NETCONF example follows the link below).

http://open.convergedigest.com/2015/04/ciscos-david-ward-on-evolution-of-open.html
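
For readers new to NETCONF, the following is a minimal, hypothetical sketch using the open source ncclient Python library to retrieve a YANG-modeled running configuration from a device. The address, credentials and filter are placeholders and are not taken from the video.

    # Minimal NETCONF example using ncclient (device details are placeholders).
    from ncclient import manager

    with manager.connect(
        host="192.0.2.1",       # hypothetical device address
        port=830,               # standard NETCONF-over-SSH port
        username="admin",
        password="admin",
        hostkey_verify=False,
    ) as m:
        # Fetch the running config, filtered to the YANG-modeled ietf-interfaces subtree.
        reply = m.get_config(
            source="running",
            filter=("subtree",
                    "<interfaces xmlns='urn:ietf:params:xml:ns:yang:ietf-interfaces'/>"),
        )
        print(reply.xml)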

Nuage's Houman Modarres on the Value of Open

The move toward open networks is unstoppable, says Houman Modarres, VP of Marketing at Nuage Networks. The attraction of open networks is undeniable: who would want a stiff, inflexible, vertically integrated solution when the Internet has already shown that creativity from across the user community and application ecosystem is the right answer?

The crux is this:  with freedom of choice comes complexity.  Nuage, a business unit of Alcatel-Lucent, is working to address this challenge by supporting a variety of deployment models.

http://open.convergedigest.com/2015/04/nuages-houman-modarres-on-value-of-open.html

Intel's John Healy on the End Goal of Open

The end goal of open networking is to have a scalable infrastructure that is also lower cost to manage, says John Healy, General Manager of the SDN Division at Intel.

Open means more openness in terms of standards and vendor choice. It also means being able to work with the open source community.

http://open.convergedigest.com/2015/04/intels-john-healy-on-evolution-of-open.html

Novatel Wireless to Acquire DigiCore for M2M and Telematics

Novatel Wireless agreed to acquire DigiCore, a provider of advanced machine-to-machine (M2M) communication and telematics solutions based in South Africa, for approximately US$87 million.

The companies have been working together to commercialize a comprehensive, end-to-end global Software-as-a-Service (SaaS) platform.

This combines Novatel Wireless' hardware with DigiCore's Ctrack, a global telematics SaaS offering for the fleet management, usage-based insurance, and asset tracking and monitoring markets.

"This combination is the result of a long-standing partnership between the two companies," said Alex Mashinsky, CEO of Novatel Wireless. "As a result of this relationship, we've already gone through an arduous process of integrating Novatel Wireless hardware into DigiCore's SaaS platform to create the industry's most complete IoT stack. These efforts are now bearing fruit as our successful joint venture has validated that the market demands a true end-to-end solution comprised of a comprehensive hardware portfolio, platform and cloud services, and integration and support."

Novatel Wireless said the merger will give it a foundation for developing and marketing comprehensive solutions for the commercial telematics industry. The collective vision is to simplify the delivery of telematics solutions from device deployment to big data collection, analytics, and reporting.

http://investor.novatelwireless.com/releasedetail.cfm?ReleaseID=918626
http://www.ctrack.com/

Dell to Resell NEC's SDN ProgrammableFlow Controller

NEC and Dell announced a new distribution agreement that will allow Dell to resell the NEC ProgrammableFlow Controller Software as one of the software options sold with Dell’s networking hardware. Dell’s S4810, S4820, S5000 and S6000 series of switches running Dell OS9 have been verified compatible with the NEC ProgrammableFlow Controller version 6, with additional validation of Dell switches planned.

“Unlocking the network from the tight coupling of hardware and software opens more customer choice to achieve better service levels at lower costs,” said Arpit Joshipura, vice president, Dell Enterprise Networking & NFV. “We are excited to work with NEC to address the market demand for automation and open standards.”

http://www.necam.com/sdn
http://www.dell.com

Wind River Opens App Store for VxWorks RTOS

Wind River is opening an app store for its VxWorks real-time operating system (RTOS). Wind River Marketplace helps customers find and evaluate best-of-breed add-on solutions from the Wind River partner ecosystem in order to enhance their VxWorks deployment.

The apps in the store are tested and validated by Wind River for seamless interoperability with VxWorks.  Target categories range from safety, security, and storage to connectivity, graphics, and development tools.

“With the synergy and validation already achieved for Wind River Marketplace products, customers get almost immediate access to best-in-class embedded technologies to easily enhance the operating environment and accelerate market deployment,” said Dinyar Dastoor, vice president and general manager of operating system platforms at Wind River.

http://tinyurl.com/WRMarketplace