(This post originally appeared on Dyn.com, a company proud to lend some expertise and insight into Speed Awareness Month.)
At Dyn, we continually obsess over the performance of our clients’ websites, dynamic applications and the rich content they serve to their customers. We help our clients win the latency battle by taking away the pain of DNS lookup latency, a hidden gremlin in the waterfall charts that plague web developers worldwide.
For most sites, DNS is just DNS, and most people assume there is nothing they can do about it. Our DynECT Managed DNS clients, however, know that we optimize that problem away for them, and we accomplish it in a variety of ways that I’ll address in this post.
It’s All About Physics
The first reality that we deal with day in and day out is that the global Internet is built on a massive network of fiber optic cable laid across the world, following major highways and railroads, and dropped into the depths of the world’s oceans. Data packets in fiber optic cable travel at roughly two-thirds of the speed of light in a vacuum (300,000 kilometers/second), so when we do the math, that translates to about 1 millisecond of network latency for every 200 kilometers of fiber traversed.
Additionally, most network measurements use Round Trip Time (RTT), in which a packet traverses the fiber twice, so we halve that figure: every 100 kilometers of fiber between endpoints adds about 1 ms of round-trip latency. In the battle against latency, this means we have to locate our DNS PoPs in places that minimize the length of the fiber runs between our DNS servers and our clients’ customers.
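The arithmetic above can be sketched in a few lines of Python (a toy calculation using the post’s 200 km/ms figure; the trans-Atlantic distance is my own rough assumption):

```python
# Back-of-the-envelope fiber latency: light in fiber covers roughly
# two-thirds of 300,000 km/s, i.e. about 200 km per millisecond one way.
FIBER_KM_PER_MS = 200

def one_way_latency_ms(fiber_km: float) -> float:
    """One-way propagation delay over a given length of fiber."""
    return fiber_km / FIBER_KM_PER_MS

def rtt_ms(fiber_km: float) -> float:
    """Round-trip time: the packet crosses the fiber twice."""
    return 2 * one_way_latency_ms(fiber_km)

print(one_way_latency_ms(200))  # 1.0 -> "1 ms for every 200 km" one way
print(rtt_ms(100))              # 1.0 -> "1 ms of RTT for every 100 km"
print(rtt_ms(5500))             # 55.0 -> a rough trans-Atlantic fiber run
```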
This is why we colocate our equipment in major hub data centers around the world, in cities and facilities where dense concentrations of networks connect to one another in the same building. By placing our DNS PoPs where networks interconnect, we minimize the amount of fiber DNS packets have to traverse.
Even though we’re proud New Hampshirites, this is also why we don’t run production services from our corporate HQ in Manchester: New Hampshire is just a spoke off the New York City hub of the Internet.
It’s All About The Electronics
While it doesn’t come up often, the networking equipment used in our data centers is intentionally chosen to reduce delays in processing DNS queries. We meticulously analyze our network topologies to reduce the number of network interfaces, routers, switches and servers every DNS packet must traverse, as each of these devices introduces serialization and queuing delays into the packet path.
Every time a packet flows from one device to another, we pay a tax: copying the packet through the device’s buffers, running it through electrical or electrical-to-optical framers, and getting it back on the wire. This means our network designs are intentionally kept flat to reduce the latency introduced by networking equipment.
This also means we’re very choosy about the server equipment we use. When evaluating server platforms, we look at the network interface cards and their serialization times, the bus speed of the system (1333 MHz only, please), and how quickly the processor can move data from memory (GT/s) for DNS processing. We’re equally compulsive about reducing delays in kernel packet-processing routines, and we regularly tune kernels and other code bases for speed.
It’s About The Routing
If DNS is the routing system for HTTP (which serves the websites we all love) and other applications, then BGP is the routing layer for DNS. As a DNS service operator, we run our own network of BGP-enabled routers connecting to our Internet Service Providers (ISPs). While most networks in the world have no need (or no idea how) to run BGP, we choose to do so to maintain complete control over our anycast routing system. This means we can optimize the paths by which our clients’ customers reach our DNS servers by adjusting our BGP parameters.
It also means that we consistently connect and announce routes to our upstream ISPs. When we connect to an ISP (currently NTT Communications, Tata Communications, TeliaSonera and Cogent), we connect in as many worldwide locations as physically possible, and we announce our routes consistently. Because BGP routes traffic network by network, announcing consistently means we never “drag” DNS packets out of their intended region. This comes at a significant additional cost to our operations, but it means that we offer low-latency DNS responses for all Internet connections. It’s the RIGHT WAY to operate an anycast network.
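To illustrate why consistent announcements keep packets in-region, here’s a toy sketch (not Dyn’s configuration; the PoP names and AS numbers are hypothetical, drawn from the documentation range) of the BGP rule that matters most for anycast: a remote network generally prefers the announcement with the shortest AS path, which usually means the nearest PoP.

```python
from typing import NamedTuple

class Route(NamedTuple):
    pop: str        # which anycast PoP originated this announcement
    as_path: tuple  # AS numbers between the viewer and that PoP

def best_path(routes):
    """Drastically simplified BGP decision process: prefer the shortest AS path."""
    return min(routes, key=lambda r: len(r.as_path))

# A hypothetical network in Frankfurt sees the same anycast prefix two ways:
routes = [
    Route("frankfurt", as_path=(64496, 64511)),         # via a local transit
    Route("ashburn",   as_path=(64497, 64498, 64511)),  # dragged across the Atlantic
]
print(best_path(routes).pop)  # -> frankfurt: the in-region PoP wins
```

Real BGP best-path selection has many more tie-breakers (local preference, MED, and so on), which is exactly why an anycast operator tunes its announcements and parameters rather than leaving path selection to chance.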
It’s About The Monitoring
Monitoring the performance of our DNS network isn’t just part of our jobs; it’s built into the DNA of our company and our staff, and it’s something we take very seriously. We monitor our network through third-party providers such as Catchpoint, from our own systems running in the Amazon AWS cloud, and from our internal systems, so we can see how the network is performing from the perspective of the backbone, from the cloud and from within. We also take measurements from cable modem and DSL users around the world through some of our secret sauce, capturing real-world DNS behavior.
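As a sketch of what this kind of external latency monitoring measures, here’s a minimal, self-contained example (my own illustration, not Dyn’s or Catchpoint’s tooling) that hand-builds a DNS A-record query and times the UDP round trip to a resolver:

```python
import socket
import struct
import time

def build_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query: header (RD=1) plus one A/IN question."""
    # Header: ID, flags, QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def query_rtt_ms(server: str, name: str) -> float:
    """Send the query over UDP port 53 and time the round trip."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    start = time.monotonic()
    sock.sendto(build_query(name), (server, 53))
    sock.recvfrom(512)
    return (time.monotonic() - start) * 1000

# e.g. query_rtt_ms("8.8.8.8", "dyn.com")  # RTT in ms to a public resolver
```

Running this from many vantage points against an anycast name server is, in miniature, what backbone, cloud and last-mile monitoring all boil down to: the same query, timed from different places on the network.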
We’re public about the results of our monitoring, too. You can find public data about our systems on DNS Comparison and DNS Reviews, two websites set up by industry advocates to provide transparency into our world.
For what it’s worth, here’s how we did in February 2011, as viewed from Catchpoint’s external monitoring perspective:
Now that’s #winning.
Tom (@tomdyninc) is one of the founders of Dyn, and over the years worked hard to earn the title of CTO. He is a technologist who is passionate about performance and optimization of DNS and Email Delivery services, and happens to have a particular knowledge of how the Internet backbone works at scale. He has a B.S. in Electrical and Computer Engineering from Worcester Polytechnic Institute (WPI).