Saturday, September 9, 2017

Google Cloud Platform adds Dedicated Interconnect

Google Cloud Platform (GCP) is launching Dedicated Interconnect, which provides private connections of up to 80 Gb/s at a number of supported locations.

This allows customers to extend their corporate datacenter network and RFC 1918 IP space into Google Cloud as part of a hybrid cloud deployment.
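
As a toy illustration of that RFC 1918 constraint, the sketch below (plain Python standard library, not a GCP API; the prefixes are hypothetical) checks whether the on-premises CIDR blocks a customer plans to extend over the interconnect actually fall inside private address space:

    # Minimal sketch: verify that on-premises prefixes destined for a hybrid
    # deployment sit inside RFC 1918 private address space (Python 3.7+).
    import ipaddress

    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(cidr: str) -> bool:
        """True if the given CIDR lies entirely within RFC 1918 space."""
        net = ipaddress.ip_network(cidr, strict=False)
        return any(net.subnet_of(block) for block in RFC1918)

    # Hypothetical on-premises prefixes a customer might extend into a VPC.
    for prefix in ["10.20.0.0/16", "172.31.0.0/24", "198.51.100.0/24"]:
        print(prefix, "-> RFC 1918:", is_rfc1918(prefix))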

Google said Dedicated Interconnect offers increased throughput and potentially lower network costs, and the service is expected to be especially beneficial for applications that move very large data sets.

Dedicated Interconnect is available in 10 Gb/s increments (a capacity-planning sketch follows the list):

  • 10 Gb/s
  • 20 Gb/s (2 x 10 Gb/s)
  • 30 Gb/s (3 x 10 Gb/s)
  • 40 Gb/s (4 x 10 Gb/s)
  • 50 Gb/s (5 x 10 Gb/s)
  • 60 Gb/s (6 x 10 Gb/s)
  • 70 Gb/s (7 x 10 Gb/s)
  • 80 Gb/s (8 x 10 Gb/s)
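
The increments above make capacity planning simple arithmetic. The sketch below is illustrative only; the 90% usable-throughput factor and the 500 TB example data set are assumptions, not Google figures.

    # Illustrative arithmetic: Dedicated Interconnect capacity comes in
    # bundles of 10 Gb/s links, so a target capacity maps to a link count.
    import math

    LINK_GBPS = 10
    MAX_LINKS = 8  # 80 Gb/s maximum per the list above

    def links_needed(target_gbps: float) -> int:
        """Smallest number of 10 Gb/s links that meets the target capacity."""
        links = math.ceil(target_gbps / LINK_GBPS)
        if links > MAX_LINKS:
            raise ValueError(f"{target_gbps} Gb/s exceeds the 80 Gb/s maximum")
        return links

    def transfer_hours(dataset_tb: float, links: int, efficiency: float = 0.9) -> float:
        """Rough hours to move `dataset_tb` terabytes, assuming `efficiency`
        of line rate is usable (an assumption, not a Google figure)."""
        bits = dataset_tb * 1e12 * 8
        rate_bps = links * LINK_GBPS * 1e9 * efficiency
        return bits / rate_bps / 3600

    links = links_needed(35)  # -> 4 links, i.e. a 40 Gb/s bundle
    print(links, "links;", f"{transfer_hours(500, links):.1f} h to move 500 TB")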

Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA.
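
For context on what those SLA levels mean in practice, here is back-of-envelope arithmetic assuming the SLA is measured monthly over a 30-day month (an assumption made for illustration, not Google's SLA definition):

    # Downtime allowed under a 99.9% vs. 99.99% uptime SLA, assuming a
    # monthly measurement window of 30 days (illustrative assumption).
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

    for sla in (0.999, 0.9999):
        allowed = MINUTES_PER_MONTH * (1 - sla)
        print(f"{sla:.2%} uptime -> {allowed:.1f} minutes of downtime per month")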

https://cloud.google.com/interconnect/

Network service tiers become part of the public cloud discussion

An interesting development has just come out of Google Cloud Platform.

Until now, we’ve watched Google’s internal global network grow by leaps and bounds. There is a public-facing network as well, with peering points all over the globe, that seemingly must always stay ahead of the growth of the Internet overall. The internal network, which carries all the traffic between Google services and its data centers, is said to grow much faster still. It is this unmatched private backbone that is one of the distinguishing features of Google Compute Engine (GCE), the company’s public cloud service.

GCE provides on-demand virtual machines (VMs) running on the same class of servers and in the same data centers as Google’s own applications, including Search, Gmail, Maps, YouTube and Docs. The service delivers global scaling using Google’s load balancing over the private, worldwide fiber backbone. This existing service continues and is now known as the “Premium Tier.”

The new option, called the “Standard Tier,” directs the user’s outbound application traffic to exit the Google network at the nearest IP peering point; from there, the traffic traverses ISP network(s) all the way to its destination. Google says this option will offer lower performance at lower cost. For Google, the savings come from using less long-haul bandwidth to carry the traffic and fewer resources to load-balance traffic across regions or to fail over to other regions in the event of an anomaly. Similarly, inbound traffic travels over ISP networks until it reaches the region of the Google data center where the application is hosted, at which point it enters the Google network.
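
To make the structural difference concrete, the following toy model contrasts the two egress behaviors; every latency number in it is a made-up placeholder chosen only to show where time is spent, not a measurement of either tier.

    # Toy model of the two egress paths described above. The millisecond
    # figures are invented placeholders, not measurements.
    from dataclasses import dataclass

    @dataclass
    class EgressPath:
        backbone_ms: float  # time spent on Google's private backbone
        isp_ms: float       # time spent on third-party ISP networks

        @property
        def total_ms(self) -> float:
            return self.backbone_ms + self.isp_ms

    # Premium Tier: traffic rides the backbone and exits at a peering point
    # close to the end user, so the ISP segment is short.
    premium = EgressPath(backbone_ms=60, isp_ms=10)

    # Standard Tier: traffic exits at the peering point nearest the source
    # region and crosses ISP networks for the rest of the journey.
    standard = EgressPath(backbone_ms=10, isp_ms=75)

    for name, path in (("Premium", premium), ("Standard", standard)):
        print(f"{name:8s} {path.total_ms:.0f} ms "
              f"({path.backbone_ms:.0f} ms backbone + {path.isp_ms:.0f} ms ISP)")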

Performance tests comparing the Premium Tier to the Standard Tier have already been carried out by Cedexis. The tests found that Google’s own network delivers higher throughput and lower latency than a Standard Tier path, which takes more routing hops over third-party network(s). Test data from the US Central region from mid-August indicate that the Standard Tier delivered around 3,051 kbps of throughput while the Premium Tier delivered around 5,303 kbps, roughly a 73% boost in throughput. For latency in the US Central region, the Standard Tier measured 90 ms at the 50th percentile, while the Premium Tier measured 77 ms, roughly a 17% advantage.
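
Re-deriving those percentages from the quoted figures shows which baseline each one uses: the throughput gain is relative to the Standard Tier, while the latency gain is relative to the Premium Tier.

    # Recompute the Cedexis-derived percentages quoted above
    # (US Central, mid-August figures).
    standard_kbps, premium_kbps = 3051, 5303
    standard_p50_ms, premium_p50_ms = 90, 77

    throughput_gain = (premium_kbps - standard_kbps) / standard_kbps
    latency_gain = (standard_p50_ms - premium_p50_ms) / premium_p50_ms

    print(f"Throughput advantage: {throughput_gain:.0%}")  # ~74%, the "roughly 73%"
    print(f"Latency advantage:    {latency_gain:.0%}")     # ~17%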

Looking at the pricing differential

The Google Cloud Platform portal shows a 17% savings for the Standard Tier on North America-to-North America traffic.

Some implications of network service tiering

The first observation is that with these new Network Service Tiers, Google is recognizing that its own backbone is not a resource of infinite capacity and zero cost that can carry all traffic indiscriminately. If Google’s infrastructure transports packets with greater throughput and lower latency from one side of the planet to the other, why shouldn’t it charge a premium for that service?

The second observation is that network transport becomes a more important consideration and comparison point for public cloud services in general.

Third, it could be advantageous for other public clouds to assemble their own Network Service Tiers in partnership with carriers. The other hyperscale public cloud companies also operate global-scale, private transport networks that outperform the hop-by-hop routing of the Internet.  Some of these companies are building private transatlantic and transpacific subsea cables, but building a private, global transport network at Google scale is costly.  Network service tiering should bring many opportunities for partnerships with carriers.

IBM and MIT to open Artificial Intelligence lab

IBM announced a 10-year, $240 million investment to partner with MIT in creating the MIT–IBM Watson AI Lab.

The MIT–IBM Watson AI Lab aims to advance AI hardware, software and algorithms related to deep learning and other areas; increase AI’s impact on industries such as health care and cybersecurity; and explore the economic and ethical implications of AI on society.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

"The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” said Dr. John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade."

“I am delighted by this new collaboration,” says MIT President L. Rafael Reif. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”

http://www-03.ibm.com/press/us/en/pressrelease/53091.wss