Sunday, April 21, 2013

Reporter Notes from ONS 2013: NTT Com, Internet2, Google

By James E. Carroll, Editor
NTT Communications was among the first Service Providers to see the potential of OpenFlow and SDN for transforming its operations, said Yukio Ito, Senior VP Service Infrastructure, NTT Communications.  Some notes from his keynote at the Open Networking Summit 2013 in Santa Clara, California:

  • NTT Com's reasons for pursuing SDN include shorter time to market, service differentiation, and reduced CAPEX/OPEX.
  • NTT Communications has a Global Cloud Vision encompassing many of its enterprise services, all self-managed under an integrated cloud portal.
  • The company launched its SDN-enabled Enterprise Cloud service in 2012, including a self-provisioning portal website.
  • OpenFlow is being used for inter-data center backups between NTT Communications' Global Data Centers.  The service allows bandwidth to be boosted on demand via the OpenFlow controller (a sketch of how such a request might look appears after this list).
  • The results of using OpenFlow/SDN for the Enterprise Cloud service have been good, including better service automation, a topology-free design, and overcoming the 4K VLAN limitation the network would otherwise face (the 12-bit VLAN ID field allows only about 4,000 segments).  There have been some issues: the OpenFlow v1.0 specification did not meet requirements for redundancy, current silicon has meant a "flow table shortage", and the network has generally been less programmable than NTT Communications expected.  The company is working to overcome these issues.
  • NTT Com is working on an SDN architecture that will provide a common framework for northbound and southbound interfaces.  The company will use vendor products and/or open source if they meet its criteria.
  • NTT Com is very interested in extending SDN to the optical transport layer. The ONF's Optical Transport WG is expected to accelerate this discussion.
  • One additional challenge is that the interconnection between a data center network and an MPLS-VPN is not currently automated.  The company is developing a "Big Boss" SDN controller to address this challenge.
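A note on the on-demand bandwidth boost mentioned above: in essence, the customer portal (or an operator script) asks the OpenFlow controller to reprogram the inter-data-center path, and the controller pushes the corresponding flow-table changes to the switches.  NTT Com did not describe the actual interface, so the following Python sketch is purely illustrative, with an assumed controller hostname, an assumed REST endpoint (/inter_dc/bandwidth) and made-up data center identifiers.

    # Hypothetical illustration only -- NTT Com has not published this API.
    # Assumes an OpenFlow controller exposing a REST endpoint that remaps
    # inter-data-center backup flows onto a higher-bandwidth path/queue.
    import requests

    CONTROLLER = "https://openflow-controller.example.net"  # assumed hostname

    def boost_backup_bandwidth(src_dc, dst_dc, mbps, hours):
        """Ask the controller to raise the bandwidth ceiling on the backup
        path between two data centers for a limited time window."""
        payload = {
            "src_dc": src_dc,         # e.g. "tokyo-dc1" (made-up identifiers)
            "dst_dc": dst_dc,         # e.g. "hongkong-dc2"
            "bandwidth_mbps": mbps,   # requested ceiling for the backup flows
            "duration_hours": hours,  # boost is torn down after this window
        }
        resp = requests.put(CONTROLLER + "/inter_dc/bandwidth", json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        # Boost the Tokyo-to-Hong Kong backup path to 1 Gbps for 6 hours.
        print(boost_backup_bandwidth("tokyo-dc1", "hongkong-dc2", 1000, 6))

Whatever the real interface looks like, the control-plane flow is the point: one request from the portal, and the OpenFlow controller translates it into flow entries on the data center interconnect.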

INTERNET2

The experimentation with Software Defined Networking underway in Internet2 in many ways parallels the birth of the commercial Internet, said Dave Lambert, President and CEO of Internet2.  Many U.S. companies, in fact, have their roots in academia, such as Cisco (Stanford), Sun (Berkeley and Stanford), Google (Stanford), Arbor Networks (U. of Michigan), Akamai (MIT), etc.

Some notes from his presentation:

  • It's time for a change. Most of the network paradigm was created 35 to 40 years ago, when Ethernet and IP emerged despite strong technical objections.
  • Will we fight re-centralization of an open control plane and hybridization to a potentially post-IP, SDN-based packet environment? This is like the packet-circuit debate with IBM's SNA group back in the day.
  • Getting bandwidth limitations out of the way for the academic community is a key objective.
  • Bandwidth and openness are imagination enablers.  An open networking stack is risky but is among the most exciting things.
  • Data intensive science in genomics and physics really do demand flexibility to handle massive data flows.
  • 29 major universities are committed to the Innovation Platform Program.  This entails (1) 100 GigE connectivity to and across their campuses, (2) support for and access to Internet2's Layer 2 OpenFlow-based service, and (3) investment in developing applications that run across this network.
  • The U.S. academic community is admittedly on the cutting edge.

SDN @ GOOGLE

Google's software defined WAN, which is the basis of its internal network between data centers, is real, it works, and it has met the company's expectations in terms of scalability and reliability, said Amin Vahdat, Distinguished Engineer at Google.  Some notes from his presentation:

  • It's been a year since Google announced that its internal backbone had been migrated to SDN.
  • Growth in bandwidth continues unabated.   Google's internal backbone actually carries more traffic than its public-facing network.
  • Planning, building and provisioning bandwidth at Google scale had been a major headache, hence the interest in SDN. Over-provisioning costs were also a major driver to adopt SDN.  Slow convergence time in the event of an outage was another factor.
  • Google wanted to go with logically centralized network control instead of the decentralized paradigm of the Internet.  This centralized approach leads to a network that is more deterministic, more efficient and more fault tolerant, according to Vahdat.
  • B4 is the name of Google's software defined WAN.  Vahdat describes it as a warehouse-scale computer (WSC) network.  It links data centers around the world (a map shows 12 nodes across Asia, North America and Europe).  So far this network is successful, so the next step may be to run some Internet user-facing traffic across this same backbone.
  • The B4 network runs OpenFlow. Google built its own network hardware using merchant silicon. There are hundreds of ports of non-blocking 10GE.
  • Google uses Quagga for BGP and IS-IS.  A hybrid SDN architecture is used to bridge sites that are fully under SDN control and legacy sites. This means that SDN can be deployed incrementally.  You don't have to deploy it everywhere on Day 1.
  • Traffic engineering is the first application on the SDN WAN. This was implemented about a year ago.  It takes into account current network demand and application priority (a minimal sketch of the idea appears after this list).
  • Google has been adding capabilities pretty quickly through frequent software releases. 
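Vahdat did not walk through the traffic engineering algorithm itself, but the idea of dividing capacity based on current demand and application priority can be shown with a toy example.  The weighted, max-min-style allocator below is purely illustrative and is not Google's implementation; the traffic classes, demands, priority weights and link capacity are all made up.

    # Illustrative sketch only -- not Google's TE algorithm.  It splits one
    # link's capacity across competing traffic classes in proportion to a
    # priority weight, never giving a class more than its current demand.

    def allocate(capacity_gbps, flows):
        """flows maps name -> (demand_gbps, priority_weight); returns Gbps per class."""
        alloc = {name: 0.0 for name in flows}
        remaining = capacity_gbps
        unsatisfied = dict(flows)  # classes still below their demand

        while remaining > 1e-9 and unsatisfied:
            total_weight = sum(weight for _, weight in unsatisfied.values())
            still_unsatisfied = {}
            spent = 0.0
            for name, (demand, weight) in unsatisfied.items():
                share = remaining * weight / total_weight  # weighted fair share
                give = min(share, demand - alloc[name])    # never exceed demand
                alloc[name] += give
                spent += give
                if alloc[name] < demand - 1e-9:
                    still_unsatisfied[name] = (demand, weight)
            remaining -= spent
            unsatisfied = still_unsatisfied
            if spent < 1e-9:
                break  # nothing left to hand out
        return alloc

    if __name__ == "__main__":
        # Made-up demands (Gbps) and priority weights for three traffic classes.
        flows = {
            "latency-sensitive": (40, 10),
            "index-push":        (80, 5),
            "bulk-replication":  (200, 1),
        }
        print(allocate(100, flows))
        # -> latency-sensitive gets its full 40 Gbps; the remaining 60 Gbps
        #    is split 5:1 between index-push (50) and bulk-replication (10).

Google's actual system computes allocations centrally across many links and paths and rebalances as conditions change, but the two inputs named in the talk, measured demand and application priority, are what drive it.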



http://www.opennetsummit.org