by Shannon Weyrick, Director of Engineering for NS1
Over the past 25 years there have been dramatic shifts in how companies deliver websites and applications. The pervasiveness of globally distributed cloud computing providers like AWS and DigitalOcean, along with the rise of Infrastructure as a Service (IaaS) and deployment automation, has dramatically reduced the costs and complexities of deploying applications. Users today can deploy servers in different parts of the world in minutes and leverage a multitude of software frameworks, databases and automation tools that all work to decentralize environments and improve uptime and performance.
The result is one of the more fundamental changes in the recent history of computing: today’s applications are distributed by default.
Unique Traffic Management Challenges for Modern Applications
While we’ve seen significant progress toward distributing applications on the infrastructure and application side, the tools website operators have at their disposal to effectively route traffic to their newly distributed applications haven’t kept pace. Your app is distributed, but how do you get your users to the right points of presence (POPs)?
Today, traffic management is typically accomplished through prohibitively complex and expensive networking techniques like BGP anycasting, through capex-heavy hardware appliances with global load balancing add-ons, or by leveraging a third-party managed DNS platform.
As the ingress point to nearly every application and website on the Internet, DNS is a great place to enact traffic management policies. However, the capabilities of most managed DNS platforms are severely limited because they were not designed with today’s applications in mind. For instance, most managed DNS platforms are built using off-the-shelf software like BIND or PowerDNS, onto which features like monitoring and geo-IP databases are grafted.
Until recently, a state-of-the-art DNS platform could do two things with regard to traffic management: first, it wouldn’t send users to a server that was down; second, it would try to return the IP address of the server closest to the end user making the request.
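To make that concrete, here is a minimal, purely illustrative sketch of that traditional decision. It is not any particular platform’s implementation, and the endpoint list, coordinates and health flags are invented for the example; it does exactly the two things described above, and nothing more: drop endpoints that failed a health check, then return the geographically closest survivor.

```python
# Illustrative only: the "classic" managed-DNS decision reduced to its two
# steps -- filter out unhealthy endpoints, then pick the nearest one.
import math

# Hypothetical endpoint inventory (documentation IPs, invented coordinates).
ENDPOINTS = [
    {"ip": "192.0.2.10", "lat": 40.7, "lon": -74.0, "healthy": True},   # New York
    {"ip": "192.0.2.20", "lat": 51.5, "lon": -0.1,  "healthy": True},   # London
    {"ip": "192.0.2.30", "lat": 35.7, "lon": 139.7, "healthy": False},  # Tokyo (down)
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def classic_answer(user_lat, user_lon):
    """Return the closest healthy endpoint -- and nothing smarter."""
    healthy = [e for e in ENDPOINTS if e["healthy"]]
    return min(healthy, key=lambda e: haversine_km(user_lat, user_lon, e["lat"], e["lon"]))

print(classic_answer(48.8, 2.3)["ip"])  # a user near Paris gets the London IP
```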
This is a bit like using a GPS unit from 1999 to get to a gas station: it can give you the location of one that’s close by and maybe open according to its Yellow Pages listing, but that’s about it. Maybe there is roadwork or congestion on the one route you can take to get there. Maybe the gas station is out of diesel, or perhaps they’re open but backed up with lines stretching down the block. Perhaps a gas station that’s a bit farther away would have been a better choice?
High-performing Internet properties face similar challenges in digital form, and they go far beyond proximity and a binary notion of “up/down.” Does the datacenter have excess capacity? What’s traffic like getting there? Is there a fiber cut, or congestion to a particular ISP we should route around? Are there any data privacy or protection protocols we need to take into account?
Intelligent DNS
Today’s data-driven application delivery models require a new way of managing DNS traffic. Next-gen DNS platforms have been built from the ground up with traffic management at their core, bringing to market exciting capabilities and innovative new tools that allow businesses to enact traffic management in ways that were previously impossible.
Here are five best practices to consider when implementing an advanced, intelligent traffic management platform; a brief illustrative sketch that combines several of them follows the list:
- Intelligent routing: Look for solutions that route users based on their ISP, ASN, IP prefix or geographical location. Geofencing can ensure users in the EU are only serviced by EU datacenters, for instance, while ASN fencing can make sure all users on China Telecom are served by ChinaCache. Using IP fencing will make sure local-printer.company.com automatically returns the IP of your local printer, regardless of which office an employee is visiting.
- Leverage load shedding to prevent meltdowns: Automatically adjusting the flow of traffic to network endpoints, in real time, based on telemetry coming from endpoints or applications, can help prevent overloading a datacenter without taking it offline entirely, and seamlessly route users to the next nearest datacenter with excess capacity.
- Enact business rules: Meet your applications’ needs with filters that apply weights, priorities and even stickiness. Distribute traffic in accordance with commits and capacity. Combine weighted load balancing with sticky sessions (i.e., session affinity) to adjust the ratio of traffic distributed among a group of servers while ensuring that returning users continue to be directed to the same endpoint.
- Route around problems: Identify solutions that provide the ability to constantly monitor endpoints from the vantage point of the end user and then send those coming from each network to the endpoint that will service them best.
- Cloud burst: Leverage ready-to-scale infrastructure to handle planned or unplanned traffic spikes. If your primary colocation environment is becoming overloaded, make sure you’re able to dynamically send new traffic to another environment according to your business rules, whether it’s AWS, the next nearest facility or a DR/failover site.
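Several of these practices compose naturally as a chain of filters applied to a candidate answer set. The sketch below is a simplified illustration of that idea, not NS1’s or any other vendor’s API: the endpoint records, field names and the 85% shedding threshold are all hypothetical. It strings together a geofence, a load-shedding step and weighted, sticky selection.

```python
# Hypothetical filter-chain sketch (field names, data and thresholds are
# invented for illustration): geofence -> shed load -> weighted sticky pick.
import hashlib
import random

ENDPOINTS = [
    {"ip": "198.51.100.10", "region": "EU", "weight": 3, "load_pct": 45},
    {"ip": "198.51.100.20", "region": "EU", "weight": 1, "load_pct": 92},
    {"ip": "198.51.100.30", "region": "US", "weight": 2, "load_pct": 30},
]

def geofence(candidates, user_region):
    """Keep only endpoints in the user's region; fall back to the full pool."""
    fenced = [e for e in candidates if e["region"] == user_region]
    return fenced or candidates

def shed_load(candidates, max_load_pct=85):
    """Drop endpoints reporting load above the shedding threshold."""
    under = [e for e in candidates if e["load_pct"] < max_load_pct]
    return under or candidates

def weighted_sticky(candidates, client_ip):
    """Weighted choice kept sticky per client by seeding the RNG from its IP."""
    seed = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return rng.choices(candidates, weights=[e["weight"] for e in candidates], k=1)[0]

def answer(client_ip, user_region):
    pool = geofence(ENDPOINTS, user_region)
    pool = shed_load(pool)
    return weighted_sticky(pool, client_ip)["ip"]

print(answer("203.0.113.7", "EU"))  # an EU client consistently gets an EU answer
```

Ordering matters in a chain like this: fencing and shedding narrow the pool first, so the weighted, sticky choice only ever lands on endpoints that are in-region and have headroom. Note that the stickiness shown here is per client and holds only while the surviving candidate pool is stable.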
Businesses that need to deliver Internet-scale performance and reliability for high-volume, mission-critical applications must rethink their current DNS and traffic management capabilities. Traditional DNS technologies are fractured and rudimentary, making the industry ripe for disruption to accommodate today’s demanding applications.
Tomorrow’s distributed application delivery will be supported by converging dynamic, intelligent and responsive routing technologies. Whether you’re building the next big thing or you’ve already made it to the Fortune 500, best practices suggest that it’s time to evaluate current DNS and traffic management platforms with an eye on solving previously intractable problems and improving performance for web-scale applications.
About the author
Shannon Weyrick is the director of engineering for NS1 and has been working in Internet infrastructure since 1996, when he got started at an ISP in upstate New York. He’s been programming, however, since time immemorial, and loves it to this day. Shannon can find his way around any full backend stack, but he’s focused on software development, and has created or contributed to many open source projects throughout the years. Shannon previously worked at Internap and F5 Networks architecting and developing distributed platforms for a variety of infrastructure projects.