How (and Why) to Benchmark Your Public Cloud Performance
April 9, 2018

Filed under: Cloud Computing, Performance Monitoring

Back in 2011, the average company spent just $6,300 on cloud computing. That’s a drop in the bucket for all but the smallest IT budgets. Things have changed in the intervening years. At the beginning of the decade, companies were just beginning to dip their toes into the cloud. Now they’re completely submerged. Out of companies with more than 1,000 employees, 26% say that they’re spending more than $6 million per year on the cloud—and 71% of enterprises plan to grow their spending by more than 20%.

Given that so many companies are spending so much money on cloud services, it’s important for them to know whether they’re getting their money’s worth. That brings us to an important question: how do you verify the quality of service you’re paying for? Most cloud service providers promise a certain standard of quality in their SLA, with terms such as:

  • Guaranteed availability—for example, 99.999% uptime
  • Responsiveness guarantees, such as protection against application slowdowns
  • Incident resolution commitments, such as a maximum time to restore service

Depending on the provider you choose, you might be paying quite a bit extra for these guarantees—but how do you know if your provider is backing them up? It’s one thing to realize that your cloud service is going down often. It’s another thing to tell the difference between 99.999% uptime and 99.997%. Therefore, it’s important for cloud customers to benchmark their public clouds in order to determine whether they’re getting what they’re paying for in terms of stability and service. Here’s how:
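The gap between those two availability figures is easier to appreciate when converted into minutes of downtime. A quick back-of-the-envelope calculation (plain Python, no cloud APIs involved):

```python
# Convert an availability percentage into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by a given availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.999, 99.997, 99.9):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/year of downtime")
```

Five-nines allows roughly 5.3 minutes of downtime per year; 99.997% allows about 15.8. Without measurement, those two service levels are indistinguishable from the user's chair.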

Cloud Benchmarking Tools and Methods

There are three important areas to measure when benchmarking the public cloud: compute, storage and network. Deficiencies in your network create the most obviously catastrophic effects on your public cloud, of course, but you should also be measuring the IOPS and latency of your storage, the disk IO of your compute and so on. Within your network, you should be measuring availability, throughput and latency.

It’s simpler to benchmark things like uptime than you might expect:

  • Both AWS and Azure offer built-in tools that can monitor uptime.
  • If you don’t trust them, or use a different service, there are a number of third-party tools. For example, Gartner offers a service known as CloudHarmony.
  • Run a tool such as iPerf between your host and the cloud to measure baseline throughput (and, in UDP mode, jitter and packet loss).
  • Benchmarks like Geekbench can measure compute aspects such as CPU and memory performance.
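If you want a second opinion on your provider's uptime numbers, a basic availability probe is simple to roll by hand. The sketch below uses only the Python standard library; the probed URL would be your own cloud endpoint, and the figures in the example are illustrative, not measurements:

```python
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answered with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def availability(successes: int, total: int) -> float:
    """Observed availability as a percentage of successful probes."""
    return 100.0 * successes / total if total else 0.0

# Example: if 100,000 one-minute probes produced 3 failures,
# the measured availability would be:
print(f"{availability(99_997, 100_000):.3f}%")
```

Run `probe()` on a schedule (cron, a loop with `time.sleep`, or a monitoring agent) and feed the tallies into `availability()`. This only sees reachability from one vantage point, which is exactly why the multi-location tools above exist.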

It’s important to hold cloud providers to account. We’re not saying that cloud providers are untrustworthy, but some of the numbers they put out might be confusing. Last year, for example, an industry survey by CloudHarmony pointed out that Google had the highest availability of the three major cloud providers. Microsoft disputed those claims, however, saying that its larger number of regions meant that a regional average would show better performance.

In other words, the best way to make sure you’re getting the best value for money in the cloud is to trust but verify.

Monitor Cloud Performance Metrics with AppNeta

Of course, one essential measurement that almost no SLAs contain is how well the cloud is actually performing. Many legacy monitoring tools can’t see those metrics, either. Why install and run four or more different monitoring tools when you can use just one? AppNeta is a comprehensive monitoring platform that covers every aspect of a public cloud implementation, including uptime and downtime, application slowdowns, latency, jitter and more. Accurate, up-to-the-second monitoring data means a great deal when negotiating with cloud providers. For example, if you can prove that your AWS implementation has less than 99.9% uptime, you may be entitled to up to a 25% service credit, which is nothing to sneeze at in an era of expensive cloud deployments. For more on gaining visibility into cloud providers’ networks, check out our guide to cloud visibility.
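To make the negotiating point concrete, here is a hypothetical illustration of how measured uptime translates into dollars. The tier below mirrors the figures mentioned above (a 99.9% threshold and a 25% maximum credit); the bill amount is invented, and your own provider's SLA defines the real credit schedule:

```python
def service_credit_pct(measured_uptime_pct: float) -> float:
    """Illustrative credit schedule; real SLAs define their own tiers."""
    if measured_uptime_pct >= 99.9:
        return 0.0
    return 25.0  # hypothetical maximum credit from the example above

monthly_bill = 10_000.00  # hypothetical monthly cloud spend in dollars
measured = 99.85          # uptime your own monitoring observed
credit = monthly_bill * service_credit_pct(measured) / 100
print(f"Measured {measured}% uptime -> ${credit:,.2f} credit")
```

The catch, of course, is the word "measured": without your own independent benchmark data, you have nothing to put in front of the provider.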