Sunday, September 23, 2012

The Case for the Big GEs... and IP-over-DWDM

by Sultan Dawood, Solutions Marketing Manager, SP Marketing, Cisco Systems

No one knows the importance of Gigabit networks better than the readers of this publication. Ten Gigabit (10G) links began appearing in big networks more than 10 years ago, and for a long time that seemed like enough capacity.

But that was then. In today’s optical conversations, talk tends to center on 40/100 Gig links, all the way up to Terabit advancements. Why? The volume of bandwidth consumed by consumers and businesses, on fixed and mobile networks alike, is astounding – upwards of 50% compound annual growth on some portions of the “big” Internet, like last-mile access networks.

If history is any indicator, doomed is the man or woman who publicly wonders why on earth so much capacity is needed. In the 1960s, cable television providers wondered why they’d ever need to build for more than 12 (analog!) channels. Back in the early days of dial-up data connections, some wondered why we’d ever need to go beyond 56 kbps. We’ve seen this “I’ll eat my hat” scenario over and over, in the course of network expansion.

Because the majority of today’s transport networks convey data over 10 Gig links, while at the same time facing unprecedented volumes of usage, decisions about expansion tend to center on three known options:

1) Add more 10 Gig links
2) Go straight to 100 Gig
3) Find a stepping stone path to 100 Gig via 40 Gig

What is perhaps less well known is the set of decisions, and the resulting economics, involved in getting to 40/100 GigE using existing routers – without provisioning and maintaining electrical-to-optical-to-electrical conversion transponders, and without the operational expense of maintaining what is essentially two disparate transport networks.

If you remember one thing about this article, please make it this: by converging the optical and IP layers of the network, capex and opex can be trimmed by 25-30%, according to our ongoing, live research with service providers. Path identification (traditionally handled within the “transport silo”) happens much more quickly; apps and services (handled within the “data services silo”) move more securely.

Consider: What if you could turn up a link to a customer in minutes -- not months?

IP over DWDM is an innovative option (we’d argue the option) that economically justifies 40/100 Gig adoption by eliminating unnecessary equipment and associated interfaces, including optics – thus lowering requirements for power, cooling and space. It’s been proven that integrating optical intelligence into a router makes it aware of any optical path degradation. That means routers can proactively ensure that apps and services in transit are protected from degradation or failure.

Why: The forward error correction (FEC) intelligence that comes with integrating optics into the router provides the awareness needed to automatically switch to a secondary, safer path before optical impairments affect the performance of any app or service.

So we won’t venture into questions of whether 40/100 Gig networks are necessary. Instead we’ll look at what’s driving the world’s data capacity needs, then examine the options in getting into “the big GEs,” including the substantial economic benefits associated with converging the optical and IP layers.

The Capacity Drivers

At least three factors are driving the world’s explicit and implicit obsession with network capacity: Device proliferation, video as an app, and the data centers fueling cloud computing.

Capacity Driver #1: Device Proliferation

Think about the number of IP-connectable devices in your home or business 10 years ago, compared to now. All of them want Internet connectivity – some more so than others.

Plus, most gadgets in the device ecosystem are mobile. Not long ago, we connected to the Internet when we went to the office, or were at home – fixed connections, via desktop and tower computers, and laptops to some extent. The Internet wasn’t an option when we were outdoors or driving, to navigate via GPS, find a restaurant, or locate friends.

Our ongoing VNI (Visual Networking Index) research indicates that by 2016, there will be nearly 19 billion global network connections – about 2.5 per person worldwide. (Click here for more VNI information: http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360_ns827_Networking_Solutions_White_Paper.html)

Capacity Driver #2: Video

Driver number two dovetails with the first one: video. With ever more powerful, HD-capable screens fetching and tossing big streams of data in and out of whoever’s data cloud, the question of how and when to scale the network is more relevant than ever.

Beyond the 50+% compound annual growth in broadband usage – wired and wireless – new pressure points are arriving in the consumer and business marketplaces with alarming regularity.

Consider the spate of recent announcements from consumer electronics and PC makers about putting high-resolution screens into handhelds, tablets, laptops and televisions. High-rez screens mean high-rez streams.

Indeed, smart phones and tablets impact real-time network capacity in a big way, because most include still and video cameras, capturing images and sound in SD and HD. Video eats up capacity like nothing else (so far). Already, according to our ongoing VNI research, more video is streamed in HD than in SD.

At the high end of the video spectrum, the 2012 trade show scene is producing regular headlines about the pursuit of 4K resolution.

Even using the best compression on the market today (known variously as H.264, AVC and MPEG-4 Part 10), a 4K stream “weighs” as much as 17 Mbps. Compare that to the 1-3 Mbps needed today for “regular” HD video compressed with H.264/AVC.

Yes, H.265 compression is on the way, promising to do for H.264 what H.264 did for MPEG-2 – roughly halve the bit rate at comparable quality – but still. The point is that network bandwidth is under enormous strain right now, with no signs of easing up.
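To put those stream rates in link terms, here is a rough back-of-envelope sketch – our own illustration, not a measurement – of how many concurrent video streams a single link could carry at the bit rates cited above, ignoring protocol overhead and all non-video traffic:

    # Back-of-envelope only: concurrent video streams per link, using the
    # bit rates cited in this article and ignoring overhead and other traffic.

    HD_MBPS = 3.0        # upper end of the 1-3 Mbps range for H.264/AVC HD
    UHD_4K_MBPS = 17.0   # 4K stream rate cited above

    def streams_per_link(link_gbps, stream_mbps):
        """Number of whole streams a link of link_gbps gigabits/sec can carry."""
        return int(link_gbps * 1000 / stream_mbps)

    for link_gbps in (10, 40, 100):
        print(f"{link_gbps}G link: ~{streams_per_link(link_gbps, HD_MBPS):,} HD streams, "
              f"~{streams_per_link(link_gbps, UHD_4K_MBPS):,} 4K streams")

At 17 Mbps per stream, a 10 Gig link tops out below 600 simultaneous 4K streams; a 100 Gig link carries nearly 6,000.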

Capacity Driver #3: Clouds and Data Centers

Consider: Networks used to move static web pages, or haul 64 kbps telephone conversations, or broadcast (in a one-to-many sense) SD video. These days, they do all that plus stream HD (unicast and multicast) video to high-resolution displays. They haul video phone conversations. They carry adaptive bit rate video, which by its nature behaves like a gas, filling all available space.

Plus, big networks traditionally were “silo’ed,” with transport and data departments and people operating largely independently of one another. Not so anymore. Why: Fetching a web page is one thing – simple, short-lived, not a huge strain on the network.

Today’s emerging applications are another story entirely, segueing into transport-heavy fare like the shipping and storing of enormous amounts of digital stuff – think digital pictures, videos, and cloud-based storage in general. Transport-heavy network needs require mutual and simultaneous attention from both the “data” and “path/transport” departments in the organization.

“The cloud,” in all its iterations, then, is capacity driver number 3. Clouds, and the data centers that enable them, are both sourcing more network traffic -- and struggling under its weight. Anyone building a cloud designed to service big geographic areas will need multiple data centers that are interconnected – preferably intelligently.

Today’s data centers are connected via a combination of routers and transport networks. Connecting Router A in Data Center A to Router B in Data Center B, for instance, requires transport infrastructure. Traditionally and so far, that transport has been 10 Gig.

However, as bandwidth demands increase – with routers adopting 100 Gigabit Ethernet interfaces and data centers moving large volumes of content – it makes sense to increase transport network capacity by transitioning from 10G to 100G DWDM. Going a step further, optical interfaces can be integrated directly into the routers, an innovative and green approach, and a truly integrated solution, that in turn justifies a faster and more cohesive transition to 40/100G links.

The Meaning and Importance of Coherence in Optical Transmission Systems

In optical transmission, “coherent” systems encode and detect information in both the amplitude and phase of the lightwave, recovering the signal at the receiver with the help of a local oscillator and digital signal processing. Coherent optical communications products arrived in the marketplace roughly coincident with 40/100 Gig networks, because they are intrinsically suited to very long-haul links – upwards of 500 km.

Any time signals travel very long distances, however, two things can happen that compromise performance. First, anomalies in the fiber plant attenuate the signal; that necessitates amplification, but amplifiers boost any noise that is present along with the intended signal. Second, the signal spreads as it travels, so dispersion compensation is required to correct for that impairment.

These impairment compensation activities do not come for free – especially when the distance in question is measured in thousands of kilometers. That’s why service providers considering the shift to 40/100G seek ways to do so without adding equipment for signal impairment compensation.

To control total cost of ownership, service providers seek the most economical yet best-performing signals their networks can carry – adding 40/100 Gig capabilities to existing routers, over existing infrastructure, even if the fiber plant is marginal in places.

This is where coherent optical systems really shine. We’ve seen (because we built it) vendor-independent 100 Gbps connectivity over a 3,000 km link, on top of existing 10 Gbps fiber infrastructure. (More here: http://bit.ly/PndKol)

Here’s a real-world example: You’re running a video connection to a customer – an MPLS tunnel mapped onto an optical wavelength. Let’s say that fiber degrades. With the forward error correction (FEC) techniques within IP-DWDM, thresholds can be set ahead of time so that traffic defaults to another optical path.

Maybe the pre-FEC value is 10^-17, but at 10^-19 the router knows to switch the video connection to a cleaner path – proactively. Having ways to set thresholds and interact between layers ensures that the video connection stays solid – and your customer has no idea that a problem almost occurred.
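As a minimal sketch of that threshold logic – illustrative names and simplified behavior, not Cisco’s actual implementation – the decision boils down to comparing a monitored pre-FEC error rate against a switch-over threshold set before the failure point:

    # Illustrative sketch of proactive, threshold-based path protection driven
    # by a pre-FEC error-rate reading. Names and values are hypothetical; a
    # real IP-over-DWDM router does this in its control plane, not in Python.

    SWITCH_BER = 1e-19   # switch-over threshold, crossed as the path degrades
    FAIL_BER = 1e-17     # point at which FEC could no longer protect traffic

    assert SWITCH_BER < FAIL_BER  # the switch happens before FEC is overwhelmed

    def choose_path(pre_fec_ber, primary, backup):
        """Select the optical path for a protected service from its pre-FEC BER."""
        if pre_fec_ber >= SWITCH_BER:
            # Degradation detected before the failure point: move traffic now,
            # so the customer never notices the impairment.
            return backup
        return primary

    # Example: the primary wavelength's error rate has crept up to 2e-19.
    print(choose_path(2e-19, primary="lambda-7", backup="lambda-12"))  # lambda-12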

Now What? Harmonizing Optical and Packet Transport

We’ve talked about capacity drivers, and about the benefits of IP-DWDM as a way to get to 40/100 Gig without stranding prior investments in routers or optics. The fact is that the dominant type of traffic on broadband networks today is packet-based, and existing optical networks aren’t as well suited to packet-based delivery as they are to other types of traffic.

This is how IP-DWDM began, for what it’s worth. Service providers asked how to bulk up capacity without disrupting capital or operational spending. Answering that question in a packet environment meant doing certain things differently, which led to the development of IP-DWDM.

Because of that, the drumbeat toward converging the optical and IP domains began, as a way to reduce capital and operational costs and to get a better handle on network controls. Equally relevant: the ability to launch new services and apps more quickly, and more securely. IP over DWDM is one example of this convergence, delivering combined capital and opex savings of 25-35%.

Perhaps some day it will seem quaint, that at one time network architects were debating the convergence of the optical and IP layers in long-haul transport. But for now, the decision to go with IP-DWDM is still a bit maverick, for those going through it.

Why? Because getting there involves cutting across people and organizational domains – never easy to do. Despite ongoing proof that a) 100 Gig gear exists that works over distances of 3,000 km without signal compensation, b) IP-DWDM is a cleaner solution because it eliminates excess optical equipment and interfaces, and c) pre-FEC thresholds can proactively re-route mission-critical data before signal paths become impaired, IP-DWDM is still in the “crawl” portion of the “crawl-walk-run” technological evolution.

We’ll end with this: Service providers will continue to sprint to keep up with capacity, and to compete with new, over-the-top providers. A more integrated and converged network lends itself better to the packet-based traffic loads of today. It scales for the future, and it saves capital and operational costs. Because one thing is certain: Data loads are not letting up anytime soon.


Sultan Dawood holds the position of Senior Marketing Manager for Core Routing and Transport Solutions at Cisco Systems. He has spent the last 18 years of his career focused on data networking and telecommunication systems, working closely with both Enterprise and Service Provider customers. Prior to Cisco, Sultan held senior marketing and engineering positions at Hammerhead Systems, Motorola, 3Com, ADC Telecommunications and Litton Systems.

Sultan has a Bachelor of Science degree in Electrical Engineering from Old Dominion University in Norfolk, Virginia. He is also a Board member and the Vice President of Marketing for the Broadband Forum.

Researchers Demonstrate 1,000 Terabits per Second over 50km of Fiber


Researchers from NTT, Fujikura, Hokkaido University and the Technical University of Denmark demonstrated ultra-large-capacity transmission at 1 petabit (1,000 terabits) per second over a 52.4 km length of 12-core optical fiber (12 light paths in a single strand) -- a new record for transmission over a single strand of fiber. One petabit per second is enough to carry 5,000 two-hour HDTV movies in a single second.

NTT said the breakthrough leverages spatially multiplexed optical communications and a new multicore optical fiber (MCF). The two companies and two universities combined their expertise to develop the multicore fiber designs, fabrication techniques, and spectrally efficient transmission technologies needed to carry out the experiment.

The experimental system used a new 12-core MCF structure with the cores arranged in a nearly concentric pattern. A novel fan-in/fan-out device coupled signals into and out of the individual cores, and a digital coherent optical transmission scheme was used to carry DWDM signals in each core. The researchers said the 12-core MCF reduced signal leakage (crosstalk) between adjacent cores, which had been a problem with conventional MCF designs. The system achieved a transmission capacity of 84.5 terabits per second per core (380 Gbps per wavelength x 222 wavelength channels), for a total capacity of 1.01 petabits per second (12 x 84.5 terabits) through 52.4 km of the 12-core fiber.
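A quick arithmetic check of those reported figures, using only the numbers in the announcement:

    # Sanity-check the reported capacity: per-wavelength rate x channels x cores.
    per_wavelength_gbps = 380      # reported capacity per wavelength channel
    channels_per_core = 222        # DWDM wavelength channels in each core
    cores = 12                     # cores in the multicore fiber

    per_core_tbps = per_wavelength_gbps * channels_per_core / 1000
    total_pbps = per_core_tbps * cores / 1000

    print(f"Per core: {per_core_tbps:.1f} Tbps")   # ~84.4 Tbps (reported as 84.5)
    print(f"Total:    {total_pbps:.2f} Pbps")      # ~1.01 Pbps, matching the record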

The result was reported in a postdeadline paper at the European Conference and Exhibition on Optical Communications (ECOC 2012).

http://www.ntt.co.jp/news2012/1209e/120920a.html

Dubai Builds UAE-IX Internet Exchange Modeled on Frankfurt's DE-CIX


Frankfurt/Main and Dubai, 18 September 2012 – October 1, 2012 marks the start of UAE-IX, a new Internet exchange in Dubai, United Arab Emirates (UAE). DE-CIX, Frankfurt's massive Internet exchange, has provided know-how and support for the new exchange.

UAE-IX is a neutral Internet traffic exchange platform that interconnects global networks and, above all, network operators and content providers in the Gulf region. UAE-IX uses a fully redundant switching platform located in a neutral, secure datacenter in Dubai. The new Internet exchange is expected to reduce latency by up to 80 per cent and costs by up to 70 per cent for Gulf providers.

The companies noted that many Internet service providers in the region have had to exchange their traffic via Europe, Asia or North America, leading to high latency. Initiated by the UAE’s Telecommunications Regulatory Authority (TRA) and supported by DE-CIX, UAE-IX delivers a highly available local alternative for regional traffic exchange, localizing Internet content.

“Across continents, data traffic is on the rise,” says Harald A. Summa, Managing Director of DE-CIX Management GmbH in Frankfurt. “The Internet’s global infrastructure must grow with it so that data travel shorter distances to get to users. As the operator of the largest Internet exchange in the world, we have drawn on our long-standing expertise to help design UAE-IX. UAE-IX will turn the GCC into an independent international hub for the digital economy and will no doubt attract Internet service providers from Europe, Africa and even India, Pakistan and China.”

www.uae-ix.net 
http://www.de-cix.net

Frankfurt's DE-CIX Internet Exchange Hits 2 Tbps Peak


DE-CIX, the Internet exchange located in Frankfurt am Main (Germany), hit a new data throughput record last week as Internet traffic across its switching fabric exceeded the 2 Tbps (terabits per second) mark for the first time.

DE-CIX currently serves over 480 Internet service providers from over 50 countries. At DE-CIX, more than 12 petabytes of data are exchanged per day.

"Although the traffic peak of over 2 Tbps marks a new high,” says Harald A. Summa, CEO at DE-CIX Management GmbH, “we do not see an end to data traffic growth on the horizon. We assume that Internet traffic will continue to grow by about 80 per cent per year in the future”. At DE-CIX, HD-TV, video and multimedia content, online gaming and cloud computing are considered the main drivers behind the continuing increase in data traffic."

The switching fabric of DE-CIX has the potential to scale to 40 Tbps, according to Arnold Nipper, Technical Manager at DE-CIX. "The DE-CIX peering infrastructure has a star-shaped topology and is spread out over a total of twelve data centers operated by different providers in the Frankfurt metropolitan area. The center of the DE-CIX peering star is composed of two redundant core switch clusters, one active and the other in hot standby mode. If there are any problems with the operative switch cluster, data traffic is immediately and automatically – in other words, within milliseconds – routed to the other switch cluster so that data streams can flow continually without interruption. The central core switch clusters are redundantly connected to 14 other switches, which are in turn connected to the ISPs."

http://www.de-cix.net/about/statistics/


  • Equipment deployed in the DE-CIX distributed fabric includes Force10 Networks' Terascale platform.