An interesting development has just come out of the Google Cloud Platform.
Until now, we’ve seen Google’s internal global network grow by leaps and bounds. There is a public-facing network as well, with peering points all over the globe, that seemingly must always stay ahead of the growth of the Internet overall. The internal network, however, which handles all of the traffic between Google services and its data centers, is said to grow much faster. It is this unmatched, private backbone that is one of the distinguishing features of Google Compute Engine (GCE), the company’s public cloud service.
GCE provides on-demand virtual machines (VMs) running on the same class of servers and in the same data centers as Google’s own applications, including Search, Gmail, Maps, YouTube, and Docs. The service delivers global scaling using Google’s load balancing over the private, worldwide fiber backbone. This level of service continues and is now known as the “Premium Tier.”
The new option, called the “Standard Tier,” directs the user’s outbound application traffic to exit the Google network at the nearest IP peering point. From there, the traffic traverses ISP networks all the way to its destination. Google says this option delivers lower performance at a lower cost. For Google, the savings come from using less long-haul bandwidth to carry the traffic and from consuming fewer resources to load-balance traffic across regions, or to fail over to other regions in the event of an anomaly. In the same way, inbound traffic travels over ISP networks until it reaches the region of the Google data center where the application is hosted; only at that point does it enter the Google network.
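To make the choice concrete, here is a minimal sketch of what an instance definition might look like against the public Compute Engine v1 REST API; the project, zone, machine type, and image values are placeholders, and the only tier-specific piece is the networkTier field on the external access config.

```python
# Minimal sketch (project, zone, machine type, and image are placeholders).
# The per-VM tier choice lives on the external access config of the NIC:
# "STANDARD" exits Google's network at the nearest peering point, while
# "PREMIUM" keeps traffic on Google's backbone for as long as possible.
import json

PROJECT = "my-project"      # hypothetical project ID
ZONE = "us-central1-a"      # hypothetical zone

instance_body = {
    "name": "standard-tier-vm",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-1",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9"
        },
    }],
    "networkInterfaces": [{
        "network": "global/networks/default",
        "accessConfigs": [{
            "name": "External NAT",
            "type": "ONE_TO_ONE_NAT",
            "networkTier": "STANDARD",
        }],
    }],
}

# With OAuth credentials, this body would be POSTed to:
#   https://compute.googleapis.com/compute/v1/projects/{PROJECT}/zones/{ZONE}/instances
print(json.dumps(instance_body, indent=2))
```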
Google has already conducted performance tests of how its Premium Tier measures up to the Standard Tier. The tests, conducted by Cedexis, found that Google’s own network delivers higher throughput and lower latency than the Standard Tier, which takes more routing hops and operates over third-party networks. Test data from the US Central region from mid-August indicate that the Standard Tier was delivering around 3,051 kbps of throughput while the Premium Tier was delivering around 5,303 kbps, roughly a 73% boost in throughput. For latency in the US Central region, the Standard Tier was measured at 90 ms at the 50th percentile, while the Premium Tier was measured at 77 ms, roughly a 17% performance advantage.
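For reference, the relative differences quoted above work out as follows; this is simply arithmetic on the cited figures, not additional measurement.

```python
# Re-deriving the relative differences from the Cedexis figures quoted above
# (US Central region, mid-August). No new data, just arithmetic.

premium_kbps, standard_kbps = 5303, 3051   # throughput
premium_ms, standard_ms = 77, 90           # 50th-percentile latency

# Premium throughput relative to Standard
throughput_boost = (premium_kbps - standard_kbps) / standard_kbps   # ~0.74

# Extra latency on Standard relative to Premium
latency_advantage = (standard_ms - premium_ms) / premium_ms         # ~0.17

print(f"Premium throughput advantage: {throughput_boost:.1%}")   # ~73.8%
print(f"Premium latency advantage:    {latency_advantage:.1%}")  # ~16.9%
```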
Looking at the pricing differential
The Google Cloud Platform portal shows roughly a 17% savings for the Standard Tier on North America-to-North America traffic.
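As a back-of-the-envelope illustration, only the roughly 17% North America-to-North America differential comes from the portal; the per-GB rate and monthly volume below are purely hypothetical.

```python
# Hypothetical egress cost comparison. Only the ~17% differential is taken
# from the pricing portal; the Premium per-GB rate and traffic volume are
# placeholders, not quoted prices.
premium_per_gb = 0.12                            # hypothetical $/GB, Premium Tier
standard_per_gb = premium_per_gb * (1 - 0.17)    # ~17% cheaper on Standard Tier

monthly_egress_gb = 50_000                       # hypothetical monthly egress

premium_cost = monthly_egress_gb * premium_per_gb
standard_cost = monthly_egress_gb * standard_per_gb

print(f"Premium:  ${premium_cost:,.2f}/month")
print(f"Standard: ${standard_cost:,.2f}/month")
print(f"Savings:  ${premium_cost - standard_cost:,.2f}/month")
```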
Some implications of network service tiering
The first observation is that with these new Network Service Tiers, Google is recognizing that its own backbone is not a resource with infinite capacity and zero cost that can be used to carry all traffic indiscriminately. If the Google infrastructure is transporting packets with greater throughput and lower latency from one side of the planet to the other, why shouldn’t the company charge more for that service?
The second observation is that network transport becomes a more important consideration and comparison point for public cloud services in general.
Third, it could be advantageous for other public clouds to assemble their own Network Service Tiers in partnership with carriers. The other hyperscale public cloud companies also operate global-scale, private transport networks that outperform the hop-by-hop routing of the Internet. Some of these companies are building private transatlantic and transpacific subsea cables, but building a private, global transport network at Google scale is costly. Network service tiering should bring many opportunities for partnerships with carriers.