Wednesday, June 21, 2017

Switch enters rapid growth phase for its SuperNAP data centres

Switch is the operation behind the massive SuperNAP in Las Vegas, also known by superlatives such as the 'world's densest data centre' and the first 'elite' data centre said to exceed the Uptime Institute's Tier IV classification. Switch currently has about 1.8 million sq feet of colocation data centre space powered up in Las Vegas, with plans to add a further 854,000 sq feet in this same market. Switch has also kicked off construction of a multi-billion dollar data centre campus in Reno, Nevada, as well as another marquee data centre in Grand Rapids, Michigan. An international expansion is also underway, with its first data centres in Europe (Siziano, Italy) and Asia (Chonburi, Thailand). Last week, Switch unveiled its latest ambition - a data centre campus spanning more than one million sq feet in Atlanta.

Switch is a privately held company founded in 2000 by Rob Roy, a young entrepreneur who seized upon the idea that the world's leading corporations and telecom operators would benefit from highly secure, scalable and energy-efficient colocation space where their systems could sit in close physical proximity to many other like-minded carriers and corporations. Many others had the same idea at the turn of the millennium, and thus came the birth of top data centre operators whose names are still recognised today (Equinix, CoreSite, Telecity), along with others that have since disappeared.

The company really got started by acquiring an Enron Broadband Services building on Las Vegas' east Sahara Boulevard that provided access to long-haul fibre routes from the national network operators. This facility was originally intended to be the operational centre of Enron's bandwidth arbitrage business. Following Enron's spectacular collapse, the property was acquired in a bankruptcy auction by Rob Roy, reportedly for only $930,000.

Rob Roy, who remains CEO and chairman of the business, had the counter-intuitive insight to build the world's largest data centre in the desert city of Las Vegas. There are several reasons why Las Vegas could have been a bad choice. First, the location is far from the financial centres of North America - there are relatively few Fortune 500 headquarters in Las Vegas. Second, Las Vegas is unmistakably situated in a desert. During July, the average daytime high temperature is 40.1C (104F). Air conditioning is commonly understood to be one of the greatest costs in running a data centre, which is why hyperscale data centres have been built near the Arctic Circle. Why build one in the desert? Third, Las Vegas is known for gambling and entertainment, but not particularly for high-tech. If you are looking for hotspots of tech talent, you might think of Silicon Valley, Seattle, Boston, Austin, Ann Arbor or many other locations before picking Las Vegas.

However, each of these objections turned out to be an advantage for Switch thanks to the persistence and innovation of its founder. Regarding its location, the Nevada desert is geographically isolated from most natural disaster zones. It is spared the earthquakes of California, Oregon and Washington. It is not in Tornado Alley, nor is it in the path of any potential hurricane, and it faces no realistic threat of a debilitating blizzard, flood or tsunami. The biggest enterprises with the tightest requirements will want at least one major data facility outside any potential danger zone. By scaling its data centre campus to an enormous size, the Switch SuperNAP has become its own centre of gravity, attracting clients to the campus. According to the company's website, there are now over 1,000 clients, including big names such as Boeing, eBay, Dell EMC, Intel, JP Morgan Chase and many others.

As for the desert heat, Switch's innovations have enabled it to meet the energy-efficiency challenge. The company's proprietary Thermal Separate Compartment in Facility (T-SCIF) design, which supports an unusually high power density per rack, does not use water cooling, nor does it use conventional computer room air conditioning units. Key ingredients include a slab concrete floor, hot-air containment chambers, high ceilings and a heat exchange system mounted above, with the HVAC cooling units located outside the building. The company cites a PUE of 1.18 for its data centres in Las Vegas and an estimated 1.20 for its new facility in Reno, Nevada.
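
For context, PUE (power usage effectiveness) is simply the ratio of total facility power to IT equipment power, so the quoted figures translate directly into cooling and power-distribution overhead. A minimal sketch of that relationship in Python; the 50 MW IT load is an illustrative assumption, not a published Switch figure:

def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT load (cooling, power distribution, lighting) implied by a PUE figure."""
    total_kw = it_load_kw * pue        # PUE = total facility power / IT equipment power
    return total_kw - it_load_kw

# Illustrative IT load only; Switch does not publish per-site figures in this form.
for site, pue in [("Las Vegas", 1.18), ("Reno (estimated)", 1.20)]:
    print(site, facility_overhead_kw(50_000, pue), "kW of overhead per 50 MW of IT load")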

Regarding technology innovation, Rob Roy now has 256 patents and patent-pending claims, many of them focused on his Wattage Density Modular Design (WDMD) approach to data centres. Talent attracts talent: whereas some data centre operators describe themselves primarily as real estate investment trusts, Switch positions itself as a technology leader. One example is its proprietary building management system, which uses more than 10,000 sensors to gather millions of data points every day for dynamically optimising operations.
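
That sensor figure is easy to sanity-check: even at modest sampling rates, 10,000 sensors generate data on that scale. A back-of-envelope calculation, assuming one reading per sensor per minute (the sampling interval is an assumption, not a published Switch figure):

sensors = 10_000
readings_per_sensor_per_day = 24 * 60          # assumed: one reading per minute
data_points_per_day = sensors * readings_per_sensor_per_day
print(f"{data_points_per_day:,} data points per day")   # 14,400,000, i.e. millions per day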

The Nevada desert enjoys abundant sunshine, and since January 2016 all of the company's data centres have operated on 100% renewable energy thanks to two nearby solar power stations operated by the company. These solar farms use PV panels to provide 180 MW of generating capacity. The focus on renewable power has earned the company an “A” grade on Greenpeace's Clean Company Scorecard, ahead of Apple, Facebook, Google, Salesforce, Microsoft, Equinix and the other operators of large-scale data centres.

Below is an overview of major facilities and developments (data from the company website and other public sources):


In March 2017, Switch officially opened the first phase of the 1.8 million-square-foot data centre campus in Grand Rapids, Michigan. The iconic building, which is an adaptive reuse of the Steelcase Pyramid, is the centrepiece of what is intended to become the largest, most advanced data centre campus in the eastern U.S. The entire campus is powered by green energy.

In February 2017, Switch inaugurated its Citadel Campus in Reno, Nevada (near Tesla’s Gigafactory). The Citadel Campus, located on 2,000 acres of land, aims to be the largest colocation facility in the world when it is fully built. The first building has 1.3 million sq feet of space. It is connected to the Switch SUPERLOOP, a 500-mile fibre backbone built by the company to provide low-latency connectivity to its campus in Las Vegas as well as to the San Francisco Bay Area and Los Angeles.

In December 2016, SUPERNAP International officially opened the 'largest, most advanced' data centre in southern Europe. The new facility is built to the specifications of the company's flagship, Tier IV Gold-rated Switch Las Vegas multi-tenant/colocation data centre. The new facility is located near Milan and includes 42,000 sq metres of data centre space with four data halls.

In January 2016, construction began on a new $300 million SUPERNAP data centre in Thailand’s eastern province of Chonburi. The new SUPERNAP Thailand data centre, which is in the Hemaraj Industrial Estate, will cover an area of nearly 12 hectares and is strategically sited outside the flood zone, 110 metres above sea level and only 27 km away from an international submarine cable landing station.

Australia's nbn selects Coriant CloudWave

nbn, the company building and operating Australia’s national broadband network, has selected the Coriant CloudWave Optics solution for its existing nationwide optical transport backbone network.

The nbn transcontinental optical transport backbone (known as the Transit Network) spans over 60,000 kilometres of fibre and is built upon the Coriant hiT 7300 Packet Optical Transport Platform. The Transit Network allows nbn to connect the different nbn Multi Technology Mix access nodes to the points where traffic is handed over to service providers, known as Points of Interconnect (POIs). The access nodes are the modern equivalent of a local telephone exchange and can be located many thousands of kilometres from their corresponding POI, of which there are 121.
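
With access nodes potentially thousands of kilometres from their POI, fibre propagation delay alone becomes a meaningful part of the latency budget. A rough sketch using the commonly cited ~5 microseconds per kilometre for light in standard single-mode fibre; the distances are illustrative, not actual nbn route lengths:

US_PER_KM = 5.0   # approximate one-way propagation delay in single-mode fibre, microseconds/km

def one_way_delay_ms(distance_km: float) -> float:
    """One-way fibre propagation delay in milliseconds (ignores equipment and queuing delay)."""
    return distance_km * US_PER_KM / 1000.0

for km in (500, 2000, 4000):      # illustrative access-node-to-POI distances
    print(f"{km} km -> ~{one_way_delay_ms(km):.1f} ms one way")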

Coriant said the introduction of its CloudWave Optics technology within the existing hiT 7300 network will provide nbn the ability to leverage the industry’s latest advances in high-speed, low latency optical networking, including per-wavelength transmission at speeds of 200G and beyond.

“Coriant’s CloudWave solution will help us in scaling the nbn and connecting 8 million happy homes by 2020. Maximizing the performance of our fiber optic infrastructure is critical as we expand the capacity of the nbn network throughout Australia and enable residential and business customers to take full advantage of fast and reliable broadband,” said Peter Ryan, Chief Network Engineering Officer at nbn.

Deployment of the Coriant flexi-rate solution, which is scheduled to begin in 2017, will target high-traffic routes within the nationwide nbn backbone network.

“Keeping pace with end-user traffic demands while lowering operating costs is a challenge shared by network operators and cloud providers around the world,” said Petri Markkanen, Managing Director, Asia Pacific, Coriant. “Our CloudWave Optics solution provides these operators a powerful toolkit to seamlessly scale to higher speeds while delivering proven ROI through lower power, reduced space, and improved reach performance.”

Coriant intros low power 400 Gbit/s muxponder for Groove G30

Coriant announced the introduction of a new 400 Gbit/s muxponder for its Groove G30 Network Disaggregation Platform that delivers benchmark power consumption of 0.20 watts per gigabit of bandwidth, which is claimed to be 50% less than comparable solutions.

The new Coriant solution also delivers features designed to further improve spectral efficiency and data integrity of high-capacity coherent optical networks, as well as to help operators significantly reduce operating expenses.

Based on 16 nm CMOS technology, the Coriant 400 Gbit/s muxponder, equipped with Coriant's silicon photonics CFP2-ACO and client-side transceivers, is claimed to consume 0.20 watts per gigabit. The new solution is compatible with deployed Coriant Groove G30 systems, thereby eliminating the need for forklift upgrades and simplifying pay-as-you-grow scalability for customers.

In addition to low power consumption, the new Groove G30 muxponder improves optical reach and spectral efficiency via support for programmable modulation, including 200 Gbit/s 8QAM. Leveraging the low power and high density of the system, network operators can deploy a full DWDM transport system with muxponding, optical multiplexing and amplification functionality within a single rack unit delivering up to 1.6 Tbit/s of capacity.
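
The headline figures are straightforward to relate to one another. A quick sanity check of per-card power and single-rack-unit capacity; the assumption that 1.6 Tbit/s corresponds to four 400 Gbit/s muxponder sleds per rack unit is an inference from the stated numbers, not a Coriant specification:

watts_per_gbit = 0.20
muxponder_gbit = 400
per_card_watts = watts_per_gbit * muxponder_gbit      # 80 W per 400G muxponder
comparable_watts = per_card_watts * 2                 # "50% less" implies ~160 W for comparable kit

ru_capacity_gbit = 1600
cards_per_ru = ru_capacity_gbit // muxponder_gbit     # 4 muxponders to reach 1.6 Tbit/s
muxponding_watts_per_ru = cards_per_ru * per_card_watts   # ~320 W, excluding mux/amp functions
print(per_card_watts, comparable_watts, cards_per_ru, muxponding_watts_per_ru)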

Coriant's new 400 Gbit/s muxponder is designed to remove the need for a separate optical line system, thereby reducing space and power requirements, while the compact and flexible 1 RU configuration makes it suitable for deployments at networking sites with space and power constraints.



  • Earlier this year, Coriant introduced a Short Reach CFP2-ACO pluggable unit for the Groove G30 platform, based on silicon photonics technology from Elenion Technologies. The CFP2-ACO solution enables power-efficient 200 Gbit/s connectivity for carrier transport and data centre interconnect applications.
  • Coriant also introduced the 7300 Open Line System (OLS) solution optimised for deployment with open DCI transponder solutions such as the Groove G30 platform for long haul and data centre interconnect (DCI) applications.

ADVA expands FSP 3000 with cross-connect, sync for metro networks

ADVA Optical Networking announced it has expanded its FSP 3000 platform to address the requirements of metro networks via the introduction of three new technologies.

The expanded ADVA FSP 3000 is designed to bring to metro networks features that were previously uneconomic there. The new solution provides a flexible, automated optical layer that does not utilise traditional ROADM technology, features a new cross-connect that allows optical transport networks (OTNs) to scale without capacity lock-in, and provides the precise synchronisation needed for 5G without suffering the limitations of current OTN technology.

ADVA's enhanced FSP 3000 platform leverages three key elements, as follows:

1.         FSP 3000 MicroConnect, a ROADM-based photonic layer that has been cost-optimised for metro networks; the solution consolidates key functions and is designed to minimise footprint, configuration and cabling requirements.

2.         FSP 3000 OpenFabric, a new OTN cross-connect designed to eliminate slot capacity assignments and the proprietary fabric adapters of a closed system, and thereby allow network operators to utilise any mix of optical services and scale as and when necessary.

3.         FSP 3000 TrueTime, which offers a new model for synchronising transport over optical networks to meet the synchronisation requirements of 5G services by implementing time-sensitive technologies that enable optimum performance and the ability to automatically compensate for delay asymmetries.
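
The delay-asymmetry point in item 3 is worth unpacking: two-way time transfer (the basis of PTP-style synchronisation) assumes the forward and reverse paths have equal delay, so any asymmetry appears directly as a time error of half the asymmetry. A minimal sketch of that relationship, describing generic two-way time transfer rather than ADVA's TrueTime implementation:

def time_error_ns(forward_delay_ns: float, reverse_delay_ns: float) -> float:
    """Time error introduced when two-way time transfer assumes symmetric paths.

    An uncompensated asymmetry of d nanoseconds produces an offset error of d / 2.
    """
    return (forward_delay_ns - reverse_delay_ns) / 2.0

# e.g. 200 ns of uncompensated asymmetry -> 100 ns of time error, already significant
# against the tight timing budgets typically quoted for 5G fronthaul.
print(time_error_ns(10_200, 10_000))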



  • Earlier this year, ADVA enhanced its FSP 3000 CloudConnect platform with the TeraFlex terminal solution, supporting transport at 600 Gbit/s rates over a single wavelength for total duplex capacity of 3.6 Tbit/s in a single rack unit. ADVA claims the TeraFlex terminal enables 50% greater density than competing technology to address the demands of Internet content providers (ICPs) and carrier-neutral providers (CNPs) seeking to scale their DCI networks.
  • ADVA also enhanced the FSP 3000 CloudConnect with direct detect open optical layer functionality, offering an alternative to using traditional coherent solutions. The direct detect technology is available either as an open line system (OLS) in a disaggregated form or as a solution incorporating the terminal and line system.

Nokia Bell Labs demos ultra low latency 10G PON for fronthaul

Nokia has announced that as part of its work to better support mobile fronthaul and latency-sensitive services, Nokia Bell Labs has demonstrated a commercial next generation PON (NG-PON) transporting ultra-low latency CPRI streams over a single fibre connecting the baseband unit (BBU) and remote radio head (RRH).

The proof of concept demonstration was conducted in accordance with the latency budget requirements for the fronthaul of commercial radio equipment, showing that existing fibre networks can be used to transport mobile traffic and help accelerate the roll-out of 5G.

Nokia noted that fronthaul comprises a key element of the C-RAN (centralised RAN) architecture in mobile networks, where the processing power is centralised away from cell sites. This model helps operators reduce the cost and power consumption of on-site installations, as well as simplifying cell cooperation schemes that help enhance network capacity and coverage.

In a C-RAN architecture, the legacy common public radio interface (CPRI) and certain next generation fronthaul interfaces require ultra-low latency transport, often in the sub-millisecond range, to meet the timing and synchronisation requirements of 4G and 5G technologies.
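
To see why sub-millisecond budgets constrain fronthaul design, it helps to separate fibre propagation delay from the delay added by the transport equipment itself. A rough sketch; the ~5 microseconds per kilometre figure is the usual approximation for single-mode fibre, while the 20 km reach and the equipment-delay number are illustrative assumptions, not figures from the Nokia demonstration:

US_PER_KM = 5.0    # approximate one-way propagation delay in single-mode fibre, microseconds/km

def fronthaul_one_way_delay_us(distance_km: float, equipment_delay_us: float) -> float:
    """One-way BBU-to-RRH delay: fibre propagation plus transport equipment delay."""
    return distance_km * US_PER_KM + equipment_delay_us

# Illustrative: a 20 km span with 50 us of equipment delay stays around 150 us one way,
# comfortably inside a sub-millisecond budget but quickly eroded by longer spans.
print(fronthaul_one_way_delay_us(20, 50))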

In the latest demonstration, Nokia Bell Labs validated that next generation PON technology, XGS-PON (10 Gbit/s symmetrical PON), can meet the strict timing constraints and deliver the capacity required, while also reducing the cost of mobile cell site transport. XGS-PON runs over existing fibre access networks and allows operators to use GPON platforms and technology to deliver high capacity services.

Nokia stated that this is a key capability for operators as they seek to address the challenge of supporting 'anyhaul' applications. By removing the need for a separate network, operators can use existing PON infrastructure in FTTH/B deployments to cost-effectively achieve the performance and coverage they require to handle the mobile transport demands resulting from densifying cell sites.

Nokia added that in addition to mobile transport applications, PONs are increasingly seen as an attractive option by operators seeking to support latency-sensitive services and IoT applications such as manufacturing control and connected vehicles.

Nokia Bell Labs' latest technology breakthrough will help mobile service providers as they move towards implementing 5G. It also expands Nokia's Anyhaul mobile transport solutions and strengthens its portfolio of converged access networks for the delivery of fixed and mobile services. The company claims that to date it is involved in nine trials or commercial deployments of XGS-PON.

Huawei releases TDM PON combo to support transition to 10G PON

Huawei, which introduced a WDM PON combo solution last year, has announced a new TDM PON combo solution for FTTH deployments designed to facilitate the evolution of GPON to 10 Gbit/s GPON.

The new TDM PON combo solution is designed to enable operators to align upgrades of EPON and GPON solutions, as well as to reduce the optical attenuation introduced by combiners. The solution can also simplify network upgrades and enable the evolution of current networks to support gigabit broadband speeds.

Huawei noted that implementing GPON upgrades normally requires the deployment of WDM1r combiners to combine GPON and 10 Gbit/s GPON ports, which results in added attenuation of the optical signals.

To enable GPON upgrades via the replacement of boards, Huawei released its WDM PON combo solution in 2016. Using this solution, a PON port integrates three components - GPON, 10 Gbit/s GPON and WDM1r - offering the same upgrade process as for 10 Gbit/s EPON in terms of board replacement. The PON solution is designed to be easy to deploy and does not require additional space or WDM1r devices.
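
For reference, the reason a WDM1r element (or an integrated combo optic) can merge the two systems onto one fibre is that GPON and 10 Gbit/s symmetric GPON (XGS-PON) use non-overlapping wavelength bands defined in ITU-T G.984.5 and G.9807.1. A simple summary of the nominal bands (approximate band edges; consult the recommendations for exact values):

# Nominal PON wavelength plan that lets GPON and XGS-PON coexist on the same fibre.
wavelength_plan_nm = {
    "GPON downstream":    (1480, 1500),   # nominal 1490 nm
    "GPON upstream":      (1290, 1330),   # nominal 1310 nm (reduced band)
    "XGS-PON downstream": (1575, 1580),   # nominal 1577 nm
    "XGS-PON upstream":   (1260, 1280),   # nominal 1270 nm
}
for system, (low_nm, high_nm) in wavelength_plan_nm.items():
    print(f"{system}: {low_nm}-{high_nm} nm")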

Huawei's new TDM PON combo solution, which is based on the WDM PON combo solution, works by changing the upstream receiving mode of the WDM PON combo optical module into TDM receiving, allowing GPON and 10 Gbit/s PON optical signals to be transmitted in turn. This model helps to simplify the combiner design and the implementation process for PON combo optical modules, as well as providing higher power budgets.

Huawei stated that the TDM PON combo solution is designed to enable equipment vendors to achieve mass production, adopt compact packaging and more easily integrate high-density port solutions.

Huawei noted that the new solution forms part of its UBB strategy, which also includes its next-generation distributed smart OLT and 10 Gbit/s PON ONT products that are in large-scale commercial use with 50+ operators. The company also offers the CloudFAN solution, which supports multi-service transport over a single fibre.


ZTE unveils compact metro-edge E-OTN

ZTE announced the launch of its metro-edge, elastic and enhanced optical transport network (E-OTN) product, the ZXMP M721 CX66A, during the 2017 Next Generation Optical Networking and Optical Data Centre Interconnect (NGON and Optical DCI) Forum.

The new ZXMP M721 CX66A solution combines high levels of integration with large capacity, intelligence and an energy efficient design and is intended to be simple and quick to deploy. The platform is designed to support service transmission in the convergence and access network layers.

ZTE noted that with the move towards 5G and growth of 'big video' services, demand for bandwidth is increasing rapidly, requiring transport networks delivering very high capacity. The ZXMP M721 CX66A solution is designed to meet the requirements of the 'big bandwidth' era across areas including service access, service transmission, operations and maintenance management and energy efficiency.

ZTE's new ZXMP M721 CX66A platform is a compact E-OTN product that features optical-electrical integration and support for ROADM and centralised electrical cross-connect technologies. The solution implements non-blocking cross-scheduling of optical channel data units (ODUk), packets (PKT) and virtual containers (VC).

In addition, a range of high-order modulation methods are supported, and the board speed on the line side supports rates of up to 200 Gbit/s. The solution also incorporates OTN-lite and low delay technologies to provide support for future 5G network deployments demanding very low latency.


The platform additionally features software-defined optical networking (SDON) technology to enable the creation of an intelligent and open network architecture.


GTT acquires Perseus for $37.5m

GTT Communications, a global cloud networking provider to multinational clients based in McLean, Virginia, announced the acquisition of Perseus, a provider of high-speed network connectivity that serves major financial and e-commerce companies worldwide.

GTT stated that the purchase price for Perseus was $37.5 million, plus the assumption of approximately $3 million in capital leases. GTT anticipates that the purchase price will represent a multiple of post-synergy adjusted EBITDA of 5.0x or lower, with integration and cost synergies to be achieved within two quarters.
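
A rough read of what that multiple implies for Perseus's earnings contribution; whether the capital leases are counted against the multiple is not stated, so both cases are shown:

purchase_price_m = 37.5
capital_leases_m = 3.0
max_multiple = 5.0

implied_ebitda_m = purchase_price_m / max_multiple                                    # ~7.5
implied_ebitda_with_leases_m = (purchase_price_m + capital_leases_m) / max_multiple   # ~8.1
print(round(implied_ebitda_m, 1), round(implied_ebitda_with_leases_m, 1))
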
GTT noted that the strategic combination with Perseus is intended to deliver benefits including:

1.         Extending the reach of its global, Tier 1 IP backbone via new PoPs and routes connecting key markets across Latin America, Asia Pacific, India and South Africa, including Pacific Express, the new low latency route between Chicago and Tokyo.

2.         Increasing its customer base, bringing clients in the financial service and e-commerce segments.

3.         Expanding its position as a provider of ultra-low latency services, as well as augmenting its cloud networking portfolio with financial market data services.

Perseus operates a global multipoint Ethernet network and 75 PoPs sited in 18 countries and provides connectivity to over 200 exchanges. It maintains a network operations centre in Galway, Ireland. Perseus offers solutions including LiquidPath trading services, PrecisionSync timing services, private managed services and wireless, microwave-based connectivity.


  • In January of this year, GTT completed its acquisition of Hibernia Networks, operator of a global network, including extensive subsea cable systems. Under the terms of an agreement announced in November 2016, GTT was to acquire Hibernia for $590 million, including $515 million in cash and approximately 3.3 million shares of GTT common stock valued at around $75 million.