It is a sign of the times that I need to clearly define the term “cloud services” if I am going to use it as an entry point to this blog. And since I wouldn’t dare assert my position to be expert enough to properly define this term (any attempt would surely bog down this entire effort), I will turn to the main sources of knowledge of our time…
If I type "cloud services" into Google, the top response is of course a link to Wikipedia. The Wikipedia search for "cloud services" gets redirected to "cloud computing," which is defined as:
Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smartphones) on demand over the Internet.
It is nice to see my opening premise is not far off the mark. Simply put, a cloud service is a web-based service delivered from a datacenter somewhere, whether over the internet or from a private datacenter, to "computers." For now, let's leave the definition of an endpoint alone. I know that is a big reach, but this is my blog, and it really isn't the point. The point is that all of these services are generally delivered from a small number of centralized datacenters and consumed at some relatively large number of remote offices.
That is where things get interesting.
If we lived in a world where email and simple web page delivery were the state of the art, well, I wouldn't have anything to write about, but we don't. The mainstream services being deployed in education, government, and enterprise accounts are ushering in a completely new level of performance requirements on the networks they depend upon. Voice over IP (VoIP), video conferencing, IP-based storage systems for file sharing, backup, and disaster recovery, and, most recently, virtual desktop services all bring new performance requirements with them. Yes, that means more bandwidth, but bandwidth is just the tip of the iceberg. All of these applications also have very real requirements on critical network parameters such as packet loss, end-to-end latency, and jitter. Unlike simple transaction and messaging applications like HTTP delivery and email, when these new "performance sensitive" applications run into excessive loss, latency, or jitter, the result is application failure: dropped calls and video sessions, failed storage services (including backup and recovery), and "blue screens" where virtual desktop sessions belong. What causes seemingly healthy networks to suffer from latency, loss, and jitter issues? More on that in a later blog…
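To make those three parameters concrete, here is a minimal sketch (not a real network probe, and not any particular product's method) of how they are commonly quantified from a series of round-trip probes: loss as the fraction of probes that never return, latency as the mean round-trip time, and jitter as the smoothed interarrival variation defined in RFC 3550.

```python
def summarize_probes(rtts_ms):
    """Summarize a list of round-trip times in milliseconds.

    A probe that was lost is recorded as None.
    Returns (loss_fraction, mean_latency_ms, jitter_ms).
    """
    received = [r for r in rtts_ms if r is not None]
    loss = 1.0 - len(received) / len(rtts_ms)
    mean_latency = sum(received) / len(received)

    # RFC 3550 interarrival jitter: an exponentially smoothed average
    # of the absolute difference between consecutive delay samples.
    jitter = 0.0
    for prev, cur in zip(received, received[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return loss, mean_latency, jitter

# Example: 10 probes, one lost, RTTs hovering around 40 ms.
samples = [40.0, 41.0, 39.5, None, 42.0, 40.5, 40.0, 55.0, 41.0, 40.0]
loss, latency, jitter = summarize_probes(samples)
print(f"loss={loss:.0%}  latency={latency:.1f} ms  jitter={jitter:.2f} ms")
```

The point of the sketch is that a healthy-looking average latency can hide a jitter spike (the 55 ms probe above) that is enough to break a voice or virtual desktop session.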
Successful cloud service delivery to remote sites depends on managing performance at that remote site. Not datacenter application performance, or server performance, or network device performance. Service-level performance analysis from a remote site is a new topic, and we call it Remote Performance Management, or RPM.
Let's start with the basics: what do we know about RPM?
First, RPM is a location-dependent topic. Of course, the traditional datacenter performance management issues still need to be dealt with. That is part of datacenter service delivery 101. No debate. But if we care about the service quality that users are actually experiencing, then we need to understand performance from the perspective of the end user, at the remote site.
Next, we need to address the complete performance management lifecycle. Simply put: Assess the remote office performance PRIOR to service deployment; Monitor the remote office performance DURING service operations; Troubleshoot issues QUICKLY (like you're there); and Report on the good, the bad, and the ugly. When you add it all up, you need a broad set of capabilities to meet these needs.
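The assess/monitor/report steps above can be sketched in a few lines of code. Everything here is illustrative: the class name, metric names, and threshold multiplier are hypothetical, not part of any product, and a real RPM tool would feed this from live probes at the remote site rather than hard-coded samples.

```python
class RemoteSiteMonitor:
    """Illustrative assess -> monitor -> report cycle for one remote site."""

    def __init__(self, site):
        self.site = site
        self.baseline = {}
        self.events = []

    def assess(self, samples):
        """Pre-deployment: record a per-metric baseline from observed samples."""
        self.baseline = {name: sum(vals) / len(vals) for name, vals in samples.items()}

    def monitor(self, current, tolerance=1.5):
        """During operations: flag any metric exceeding baseline * tolerance."""
        for metric, value in current.items():
            allowed = self.baseline[metric] * tolerance
            if value > allowed:
                self.events.append((metric, value, allowed))

    def report(self):
        """Summarize the good, the bad, and the ugly."""
        if not self.events:
            return f"{self.site}: all metrics within tolerance"
        lines = [f"{self.site}: {len(self.events)} threshold violation(s)"]
        for metric, value, allowed in self.events:
            lines.append(f"  {metric}: {value:.1f} (allowed {allowed:.1f})")
        return "\n".join(lines)

# Example usage with made-up numbers for one branch office.
mon = RemoteSiteMonitor("branch-42")
mon.assess({"latency_ms": [40, 42, 41], "jitter_ms": [2.0, 2.2, 1.8]})
mon.monitor({"latency_ms": 95.0, "jitter_ms": 2.1})
print(mon.report())
```

The design point is simply that assessment produces the baseline against which monitoring and reporting are judged; skip the assess step and "slow" has no meaning.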
Finally, we need to keep it simple, affordable, and scalable. The problem with most solutions around the remote office is not the device cost, but rather the administrative cost.
The bottom line is that if you are attempting to deliver today's critical services for remote site consumption, you need to understand performance, so you'd better check your RPMs…