If you’re suffering from slow application performance, chances are some degree of latency is to blame. Every computer has its own performance limits: run one too many applications and lag results from the machine’s inability to process all of its inputs. But when the application relies on connectivity with a remote device, such as a backup service, an Exchange e-mail server, or even just a video chat with a colleague over the internet, the performance disruption or failure comes from the network rather than the hardware.
The latency on your network defines the minimum wait before the person or service on the other end receives the packets you send. For a connection between New York and Los Angeles, the best-case latency is roughly 40ms, a floor set largely by how fast a signal can physically travel through fiber. In practice, network traffic and misconfigurations can dramatically increase this time.
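To see where a floor like that 40ms figure comes from, here is a minimal back-of-the-envelope sketch in Python. It assumes light in optical fiber travels at roughly 200,000 km/s (about two-thirds of c) and uses an approximate NY-to-LA great-circle distance of 3,940 km; both numbers are illustrative, and real fiber routes are longer than the great circle.

```python
def min_rtt_ms(distance_km: float, signal_speed_km_per_s: float = 200_000) -> float:
    """Lower bound on round-trip latency from signal propagation alone.

    Light in optical fiber covers roughly 200,000 km/s, so the round trip
    (there and back) can never beat 2 * distance / speed.
    """
    return 2 * distance_km / signal_speed_km_per_s * 1000  # seconds -> ms

# Approximate NY-to-LA great-circle distance; actual fiber paths are longer.
print(round(min_rtt_ms(3940), 1))  # -> 39.4
```

Queuing, routing detours, and equipment delays all add on top of this physical minimum, which is why observed latency is usually well above it.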
When it comes to cloud-based applications, such as the CRM application we use here at Apparent Networks, both of these factors come into play and compound the response time as performance degrades. The local client for a cloud-based application is usually a thin client, or even just a web browser, that only relays inputs to the actual application running on a machine in the cloud. In this case, the response time is the lag for a signal to reach the server, plus the processing time for the server to create a response, plus the time for that response to travel back to the client machine (for the NY-to-LA example, the network legs alone total at least 40ms).
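The breakdown above can be sketched as a simple sum. The function and figures below are hypothetical, just to show how quickly the network portion comes to dominate what the user perceives as "application" slowness.

```python
def cloud_response_time_ms(network_rtt_ms: float, server_processing_ms: float) -> float:
    """Total wait seen by a thin client for one request:
    the full network round trip plus the server's processing time."""
    return network_rtt_ms + server_processing_ms

# Best-case NY-to-LA round trip (40 ms) plus a hypothetical 10 ms of server work:
print(cloud_response_time_ms(40, 10))   # -> 50.0
# The same request on a congested link with a 200 ms round trip:
print(cloud_response_time_ms(200, 10))  # -> 210.0
```

Note that in the second case the server is just as fast; only the network changed, yet the user's wait quadrupled.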
This increase in response time is exacerbated further when the application is ‘on demand’, or live, requiring packets to travel upstream and then back downstream every time an action is taken. A virtualized desktop is a good example: because even the smallest action requires data to be sent, each individual action is subject to the full round-trip latency of your network.
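A toy calculation makes the cumulative cost concrete. Assuming (hypothetically) that each interactive action on a virtualized desktop pays one full round trip and nothing is batched or pipelined:

```python
def interactive_wait_ms(actions: int, rtt_ms: float) -> float:
    """Total network wait across a session where every keystroke or click
    pays one full round trip (a simplifying assumption; some remote-desktop
    protocols batch or pipeline input to soften this)."""
    return actions * rtt_ms

# 100 discrete actions over a link with an 80 ms round trip:
print(interactive_wait_ms(100, 80))  # -> 8000.0, i.e. 8 s of pure network wait
```

Even a modest 80ms round trip turns into seconds of accumulated waiting over a short working session, which is why latency, not bandwidth, usually dictates how "responsive" a remote desktop feels.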
Latency is a problem, and one we can’t afford to ignore. So how do we ensure that latency stays at a minimum? For managed devices and services, optimization is up to the service provider. For local devices and services, a plethora of tools can detect drops in service quality and alert you. But in an increasingly network-dependent world, how do we detect and address increases in response time on our carrier networks and externally managed services?
Without a network monitoring tool that can determine exactly what the problem is and pinpoint where on the network it is happening, troubleshooting is virtually impossible. If the performance issue stems from a lack of bandwidth, searching for lagging devices will not help. If the latency is occurring across the WAN, you have no leverage to demand optimizations without proof. PathView Cloud is one tool that monitors network performance, including latency, along the path to remote applications and back to the source, detecting changes in network performance in real time. Learn more about PathView Cloud’s reporting capabilities here!