Is your network speed and performance not exactly what you thought it would be? People often refer to the speed of their connection by stating what their bandwidth is. But that’s not all that goes into the speed and performance of a data network. You also need to look at your latency. Latency is how long it takes a data packet to traverse a network, and it directly impacts the performance of your network service. Unless a company is running a very specific, latency-sensitive service over a private line network, latency is an often-overlooked component of a service.
This blog will not be discussing anything specific to ultra-latency sensitive applications such as High Frequency Trading (HFT). HFT and the network/routes that support it are very specialized; we can address that in another post. But for now, we will touch on what latency is, why it’s important, and how to generally calculate latency for a proposed network.
What is speed? Speed is the end result of combined latency and bandwidth. Consider this plumbing analogy: bandwidth is like the diameter of the pipe, and latency is how quickly the water moves through it. If you increase bandwidth but your download times don’t improve, your network is bottlenecked somewhere else. One culprit might be the router ports, which, to extend the plumbing analogy, are like the handles on the faucet. Carriers control your bandwidth by provisioning these ports on their routers to whatever your agreement states.
Latency in a data network depends on a few things. The biggest factor is distance: we can’t send data faster than the speed of light, and light through fiber travels at roughly two-thirds of its speed in a vacuum. But other things can increase latency, such as the number of “hops” the traffic takes. Every router or switch in the path must analyze and forward the traffic on its way to its destination, and each one adds to the latency. This applies to both standard internet circuits and private lines.
What questions should you keep in mind when calculating latency as it applies to your business?
What is the difference between round-trip-time (RTT) and one-way latency?
RTT is the time it takes for a signal to travel from the origin to its destination and then back again. One-way latency refers to only one leg of that trip (i.e. from one end point to a destination end point, but not back again). Typically, when vendors mention latency requirements, they are referring to RTT, but it’s always good practice to double check. This may also be referred to as Round Trip Delay (RTD).
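The relationship above can be sketched in a few lines of Python. On a symmetric path, RTT is simply twice the one-way latency; the 18.5 ms figure below is a hypothetical one-way latency used only for illustration, not a measurement from any real route.

```python
# RTT vs one-way latency, assuming a symmetric path (same route both directions).
one_way_ms = 18.5        # hypothetical one-way latency between two metros
rtt_ms = 2 * one_way_ms  # RTT covers both legs of the trip

print(f"One-way: {one_way_ms} ms, RTT: {rtt_ms} ms")
```

If a vendor quotes "37 ms" without qualification, this is why it matters to confirm whether they mean RTT or one-way.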
Does it make sense to name your preferred fiber path on your contract or service agreement?
Yes—this will help to ensure that your provider doesn’t route your circuit through a suboptimal fiber route. Doing this will also ensure that the circuit cannot be “groomed”, or moved during standard carrier network maintenance events. Be aware that adding custom or dedicated routes can also affect the overall cost of the service with some providers.
Can VPNs increase latency?
Indirectly, yes. Fragmentation/reassembly and encryption/decryption can increase latency. Fragmentation occurs when a file must be broken up into more packets in order to traverse the network, because the VPN adds overhead to each of your data packets; exactly how much depends on the VPN protocol used. This doesn’t change the network’s propagation latency, but the additional data means it takes a little longer to send the traffic from point A to point B. Encryption/decryption doesn’t add to network latency either, but it does take your equipment time to apply and remove the encryption. For most users, this is not a huge consideration, unless you are transporting massive amounts of data or running an application that is extremely sensitive to latency.
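The packet-count effect of tunnel overhead can be sketched with a bit of arithmetic. The 60-byte overhead below is an illustrative assumption (actual overhead varies by VPN protocol and options), and the 1500-byte MTU is the typical Ethernet default.

```python
import math

# Sketch of how VPN encapsulation overhead increases the packet count for a
# transfer. Overhead figure is an illustrative assumption, not an exact
# value for any specific VPN protocol.
MTU = 1500               # typical Ethernet MTU, in bytes
VPN_OVERHEAD = 60        # assumed per-packet tunnel overhead, in bytes

def packets_needed(payload_bytes, mtu, overhead=0):
    """Packets required to carry a payload given per-packet overhead."""
    usable = mtu - overhead          # bytes of payload each packet can carry
    return math.ceil(payload_bytes / usable)

payload = 1_000_000                  # a 1 MB transfer
plain = packets_needed(payload, MTU)
tunneled = packets_needed(payload, MTU, VPN_OVERHEAD)
print(plain, tunneled)               # more packets => more time on the wire
```

The extra packets don’t change propagation latency, but they do add serialization and processing time, which is why the transfer feels slightly slower.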
What is the difference between latency and ping?
The two concepts are related, and sometimes (though not always) used interchangeably. Latency refers to the amount of time it takes for a signal to travel from one point (such as a data center) to another. Ping refers to the utility that measures this: it reports how long it takes a device (such as a PC or smartphone) to reach a server and receive a response.
How can you estimate the latency of a wavelength between metro areas?
You can start by examining your carrier’s fiber map. KMZ file types have proven to be the most accurate thus far.
- First, measure the fiber path between the two metro areas, and then add 5%-10% to account for any slowdowns at various turns or junctures in that path.
- Double that result to get the round-trip distance.
- Next, divide that round-trip distance by 124 miles per millisecond, the approximate speed of light through fiber, to get the propagation time in milliseconds.
- Finally, add roughly 2 milliseconds to account for the various switches, routers, and telecommunications equipment across a typical WAN circuit.
- This final number will give you a high level estimate of the latency for the wavelength between the two end points.
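The steps above can be sketched as a short Python function. The 800-mile fiber route in the example is a hypothetical distance, and the 7% slack factor is just a midpoint of the suggested 5%-10% range.

```python
# Back-of-the-envelope wavelength latency estimate, following the steps above.
# The fiber distance and slack factor below are illustrative assumptions.

def estimate_rtt_ms(fiber_path_miles, slack=0.07, equipment_ms=2.0):
    """Estimate round-trip latency for a wavelength between two metro areas.

    fiber_path_miles: one-way fiber route distance (e.g., measured from a KMZ map)
    slack: 5%-10% padding for turns and junctures in the path
    equipment_ms: fixed allowance for switches/routers on a typical WAN circuit
    """
    padded = fiber_path_miles * (1 + slack)   # step 1: add 5%-10% for slowdowns
    round_trip_miles = padded * 2             # step 2: double for the round trip
    propagation_ms = round_trip_miles / 124   # step 3: light in fiber, ~124 mi/ms
    return propagation_ms + equipment_ms      # step 4: add equipment delay

# Example: a hypothetical 800-mile fiber route between two metros
print(round(estimate_rtt_ms(800), 1))
```

This is a high-level estimate only; the actual figure depends on the exact fiber route, regeneration sites, and equipment in the path, so always confirm against the carrier’s published latency for the route.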
Though this discussion is by no means exhaustive, we hope that it helps to answer any questions you might have about latency. Please contact a GCN representative today to continue this conversation. We’re experts in developing optimal networking solutions for our customers, and we’re passionate about simplifying and streamlining the solutions-building process.