Filed under: Performance Monitoring
Remote monitoring is an old problem. In fact, it was a challenge 150 years ago with the emergence of the so-called Victorian Internet, the global communication network based on electric telegraphy. That “internet” depended on human monitoring and the most basic kinds of performance measures.
Today, no one charged with monitoring distant IT functions at remote locations has the luxury of Victorian-era response times. In an environment where milliseconds count, it’s critical to have virtually instant visibility into traffic and performance, particularly at remote endpoints. And, while growing adoption of cloud services reduces responsibility for infrastructure management, remote monitoring brings its own complexities.
In such a high-stress environment, it’s easy to forget that, at root, monitoring distant IT functions is all about the user.
Of course, everyone knows the whole point of IT is serving users. But it’s easy to get lost in the aggregated details of servers and storage and forget to emphasize the view from the device or desktop. Just because you can’t readily see a slowdown or performance problem doesn’t mean it’s not there. What’s more, when those users are decentralized and sometimes in very remote locations, actually determining what they are experiencing, other than through cryptic complaint forms or help desk reports, can be challenging without the right monitoring technology.
Top Performance Monitoring Features for Remote Users
Now that performance monitoring is often SaaS-based, it is usually easier to deploy and less expensive to operate. Among the wide range of functions that can be part of monitoring tools, three stand out (though they are not always present in each product or service):
1. End-user experience monitoring examines indicators of what the actual end user is experiencing and, to the extent those indicators reflect that experience, helps identify real performance problems.
2. Transaction monitoring is similar but looks at actual transactions, the tasks performed by people or by systems, and offers insight from that perspective into slowdowns, bottlenecks and other performance issues.
3. Synthetic monitoring could also be described as active monitoring. Rather than simply looking at existing actions and identifying problems after they occur, synthetic monitoring tests sites and applications through scripting to simulate traffic, and can identify problem areas before they otherwise reveal themselves.
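To make the third item concrete, here is a minimal sketch of a synthetic check: a scripted probe that simulates a user request and records status and latency. The target URL and the two-second latency budget are illustrative assumptions, not values from any particular monitoring product.

```python
import time
import urllib.request

# Hypothetical probe target and latency budget; substitute your own
# application URL and SLA threshold.
TARGET_URL = "https://example.com/"
LATENCY_BUDGET_SECONDS = 2.0

def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Issue one scripted request and record its status and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            status = response.status
    except Exception as exc:
        # Any failure (DNS, timeout, bad URL) counts as a failed check.
        return {"ok": False, "error": str(exc),
                "latency": time.monotonic() - start}
    latency = time.monotonic() - start
    return {"ok": status == 200 and latency <= LATENCY_BUDGET_SECONDS,
            "status": status, "latency": latency}

if __name__ == "__main__":
    print(synthetic_check(TARGET_URL))
```

Run on a schedule (cron, or a loop with a sleep) from each remote location, a probe like this surfaces slowdowns before a user ever files a complaint, which is the "active" advantage synthetic monitoring has over the other two approaches.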
To not only identify but also diagnose and fix performance issues, it helps if your tool can explain why something is happening and what its impact is on network traffic, as well as on processing and data demand at the server.
Keep in mind that this can be a “fix it when it breaks” approach, or a more proactive approach that helps you optimize operations. Of course, in a dynamic IT environment, repair or optimization can be a moving target, but either way, monitoring is the key enabler.
Remember Remote Office Security and Scale
Ye Olde Victorian Internet was very easy to tap into, literally. Just a few wires and screws switched around, and you could be reading or sending your own coded messages down the line. Obviously, it's harder to locate and respond to issues in today's systems. But monitoring can prove a great simplifier. It can zero in on issues in remote offices, which are a favorite entry point for hacking exploits: systems may be less standardized, policies and procedures more lax, and personnel less trained. Detailed, ongoing monitoring establishes what normal looks like, so you can spot anomalous activity and get ahead of issues.
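The "baseline versus anomaly" idea can be sketched very simply. The example below uses a z-score test against recent readings; the threshold of three standard deviations and the sample login counts are illustrative assumptions, and real monitoring products use more sophisticated baselining.

```python
import statistics

def is_anomalous(baseline: list, observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag a reading that deviates from the baseline mean by more than
    `threshold` standard deviations (a simple z-score test)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        # A perfectly flat baseline: any change at all is anomalous.
        return observed != mean
    return abs(observed - mean) / stdev > threshold

# Illustrative numbers: daily login counts from a remote office.
baseline_logins = [42.0, 38.0, 45.0, 41.0, 39.0, 44.0, 40.0]
print(is_anomalous(baseline_logins, 41.0))   # typical day -> False
print(is_anomalous(baseline_logins, 400.0))  # sudden spike -> True
```

A spike like the second case could be a legitimate surge or a credential-stuffing attempt; either way, it is exactly the kind of occurrence that monitoring of a less-standardized remote office should surface for investigation.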
And finally, keep in mind the need for scalability. In cloud, far more than in an on-premises world, the potential to scale enormously over a short period of time is real. Your monitoring capability needs to be able to go along for the ride.