End-User Monitoring: RUM or Synthetic?
September 20, 2013

Filed under: Performance Monitoring


End-user performance is the metric by which most businesses judge their web applications: is the responsiveness of the application an asset or a liability to the business?  Studies show that users are growing more and more demanding, while average pageloads keep getting heavier–more than doubling in weight since 2010.  Combine that with frequent releases and updates from marketing, and pretty soon the optimization job is never quite done.

Ongoing monitoring of application performance from the end user’s perspective is therefore critical; fortunately, there are a number of approaches to choose from.  But which one(s) are best?

Real User Monitoring


While the server-side performance of a web application can be measured by looking at HTTP requests in your data center, the full pageload experience–downloading static assets from a CDN, rendering the page, executing JavaScript–cannot be seen from that vantage point.  Real user monitoring (RUM) is the practice of using JavaScript embedded in web pages to gather performance data about the end user’s browsing experience, from the browser’s perspective.
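The idea can be sketched in a few lines. The snippet below is a minimal, hypothetical RUM beacon–the `/rum-beacon` endpoint and the metric names are illustrative, not any particular vendor’s API. The timing math is pulled into a standalone function that accepts any object shaped like the browser’s `window.performance.timing`, so the logic can run (and be tested) outside a browser:

```javascript
// Hypothetical RUM sketch. computeTimings() accepts any object shaped like
// window.performance.timing and breaks the pageload into coarse phases.
function computeTimings(t) {
  return {
    network: t.responseEnd - t.navigationStart,              // DNS + connect + request + response
    domProcessing: t.domContentLoadedEventStart - t.responseEnd, // building the DOM
    render: t.loadEventStart - t.domContentLoadedEventStart, // remaining work until onload
    total: t.loadEventStart - t.navigationStart
  };
}

// In a real page, this would run after load and beacon the data home:
// window.addEventListener('load', function () {
//   var data = computeTimings(window.performance.timing);
//   navigator.sendBeacon('/rum-beacon', JSON.stringify(data)); // endpoint is illustrative
// });
```

Because the script rides along with every page a real user loads, data like this arrives for every page view without anyone scripting it in advance.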

This is a great improvement for a business’ understanding of application performance.  The data gathered shows the full timing, based on real pages being loaded, from real browsers, in real locations around the world.  The technology applies to desktop, mobile, and tablet browsers equally well.

The biggest advantage of measuring actual data is that there’s no need to pre-define the important use cases. As each user goes through the application, RUM captures everything, so no matter what pages they see, there will be performance data available. This is particularly important for large sites or complex apps, where the functionality or interesting content is constantly changing.

Thanks to advances in browser APIs such as the Navigation Timing API, detail in RUM data is better than ever.  The API divides time spent in the browser into time spent building the DOM and time elapsed until document.ready fires. This is a great starting point, and especially for data that’s captured comprehensively over all users, it’s a great point of triage.  If a page was slow, why was it slow? Unfortunately, while RUM provides this starting point, it doesn’t necessarily point to the precise asset.  Additionally, the growing trend of “single page apps”–apps which do not perform full pageloads to gather new data, like GMail or Facebook–do not yield very good RUM data.

Improvements on the horizon such as the Resource Timing API may improve the situation, but right now RUM is primarily useful for understanding whether problems exist anywhere within an application. It even gives some high-level triage–is the problem in the network, the application server, or the end user’s environment? Beyond that, RUM can’t tell the difference between a drop in traffic and a loss of network connectivity. Worse yet, an increase in RUM latency might indicate a degradation in backend performance, or it may just be a temporary increase in the use of a relatively slow report generation feature. To get this information, RUM alone isn’t sufficient.
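The high-level triage mentioned above can be sketched as a simple attribution function. This is an assumption-laden toy, not a production heuristic: it takes a Navigation Timing-style object and names the phase–network, server, or client–that consumed the most time:

```javascript
// Illustrative triage helper (simplified attribution, not a production heuristic).
// Given a Navigation Timing-style object, name the dominant phase of the load.
function triage(t) {
  const phases = {
    // DNS/TCP setup plus response download
    network: (t.connectEnd - t.navigationStart) + (t.responseEnd - t.responseStart),
    // time to first byte, i.e. backend work
    server: t.responseStart - t.requestStart,
    // DOM construction, JS execution, rendering
    client: t.loadEventStart - t.responseEnd
  };
  // Return the phase that consumed the most time.
  return Object.keys(phases).reduce((a, b) => (phases[a] >= phases[b] ? a : b));
}
```

A load with a 350 ms wait for the first byte but quick download and render would triage to "server", pointing investigation at the backend rather than the page weight.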

Synthetic Performance Monitoring


Synthetic performance monitoring, sometimes called proactive monitoring, involves having external agents run scripted transactions against a web application.  These scripts are meant to follow the steps a typical user might take–search, view product, log in, check out–in order to assess the experience of a user.  Traditionally, synthetic monitoring has been done with lightweight, low-level agents, but increasingly, it’s necessary for these agents to run full web browsers to process the JavaScript, CSS, and AJAX calls that occur on pageload.

Unlike RUM, synthetics don’t track real user sessions. This has a couple of important implications. First, because the script executes a known set of steps at regular intervals from a known location, its performance is predictable.  That makes it more useful for alerting than often-noisy RUM data.  Second, because it occurs predictably and externally, it’s better for assessing site availability and network problems than RUM is–particularly if your synthetic monitoring has integrated network insight.
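The alerting half of that argument can be illustrated with a small check runner. Everything here is a hypothetical sketch–`runCheck` and its result shape are made up for illustration–but it captures the core loop: time one scripted transaction, and flag it if it exceeds a threshold or fails outright. The transaction is any async function, such as a browser script driving the checkout flow:

```javascript
// Hypothetical synthetic check runner (names and result shape are illustrative).
// Times one scripted transaction; alerts on failure or on exceeding thresholdMs.
async function runCheck(transaction, thresholdMs) {
  const start = Date.now();
  try {
    await transaction(); // e.g. a Selenium/WebDriver script: search, add to cart, check out
    const elapsed = Date.now() - start;
    return { up: true, elapsed, alert: elapsed > thresholdMs };
  } catch (err) {
    // A thrown error means a step failed: the transaction could not complete.
    return { up: false, elapsed: Date.now() - start, alert: true };
  }
}

// A scheduler would invoke this at a fixed interval from fixed locations,
// which is what makes the resulting time series steady enough to alert on.
```

Because the same steps run on a fixed cadence, a single threshold breach is meaningful in a way that one slow sample in noisy RUM data is not.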

Many companies actually use this sort of monitoring before getting to production, in the form of integration tests with Selenium. Synthetic transactions in production can actually re-use these same scripts (as long as they don’t change data). As applications get more complex, proxy metrics like load or server availability become less useful for measuring uptime. Running Selenium scripts against production isn’t a proxy measurement; it precisely measures uptime, providing full confidence that if the synthetic transactions are completing, the site is up and running.

Finally, because synthetics have full control over the client (unlike the sandboxed JS powering RUM), the detail that can be garnered is staggering–full waterfall charts, resource-by-resource performance, and even screenshots/videos of the pageload in action to determine paint times.  This type of insight is currently the best way to understand the performance of state transitions in single page apps, as well.

The Winner?

Looking at synthetics vs RUM, there’s a mess of tradeoffs.  Each seems to be better at certain aspects of performance monitoring than the other, so which one wins?

It may not be a competition after all, but rather two complementary puzzle pieces–synthetics provide detail, reliability, and availability, while RUM provides a grounding in real user experience.  For this reason, we’re of the mind that the best insight into performance comes from a combination of synthetic and real-user monitoring, and this is why AppNeta provides access to both, integrated into TraceView for maximum insight.

Synthetics + RUM = Crazy Delicious

TraceView users have been taking advantage of this integration for months, but it got even better last week with the release of Synthetic-RUM comparison.  Now users can plot their real user traffic vs synthetics over time, and understand differences in performance across regions, browser types, and synthetic scripts.

RUM vs synthetics.

This allows teams to ensure that their synthetic monitoring is grounded in real-world data, as well as view their synthetic performance in a new, global way.


Best of all?  You can try it for free today!  Click here to get started.