End-user experience is the metric by which most businesses judge their web applications: is the application's responsiveness an asset or a liability to the business? Studies show that users are growing more and more demanding, while average page loads keep getting bigger, more than doubling in weight since 2010. Combine that with frequent releases and updates from marketing, and pretty soon the optimization job is never quite done.
Ongoing monitoring of application performance from the end-user's perspective is therefore critical; fortunately, there are a number of approaches to choose from. But which one (or ones) is best?
Real User Monitoring
Real user monitoring (RUM) is a big step forward in a business's understanding of application performance. The data gathered shows full load timings, based on real pages being loaded, from real browsers, in real locations around the world. The technology applies equally well to desktop, mobile, and tablet browsers.
The biggest advantage of measuring actual data is that there’s no need to pre-define the important use cases. As each user goes through the application, RUM captures everything, so no matter what pages they see, there will be performance data available. This is particularly important for large sites or complex apps, where the functionality or interesting content is constantly changing.
Thanks to advances in browser APIs such as the Navigation Timing API, the detail in RUM data is better than ever. It breaks the time spent in the browser into phases, such as time spent building the DOM and time until document.ready fires. This is a great starting point, and because the data is captured comprehensively across all users, it's a great point of triage: if a page was slow, why was it slow? Unfortunately, while RUM provides this starting point, it doesn't necessarily point to the precise asset at fault. Additionally, the growing class of "single page apps" (apps like Gmail or Facebook, which fetch new data without performing full page loads) yields very little useful RUM data.
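As a rough sketch of what Navigation Timing exposes (the field names below come from the W3C `performance.timing` interface; the sample values are invented for illustration), a RUM beacon might compute a phase breakdown like this:

```javascript
// Compute a phase-by-phase breakdown from a Navigation Timing
// (performance.timing) snapshot. All timestamps are epoch milliseconds.
function breakdownTiming(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,
    request: t.responseStart - t.requestStart,   // time to first byte
    response: t.responseEnd - t.responseStart,   // payload download
    domProcessing: t.domContentLoadedEventStart - t.responseEnd,
    total: t.loadEventEnd - t.navigationStart,
  };
}

// Invented sample values; in a browser you would pass
// window.performance.timing instead.
const sample = {
  navigationStart: 0,
  domainLookupStart: 5, domainLookupEnd: 30,
  connectStart: 30, connectEnd: 70,
  requestStart: 70, responseStart: 270,
  responseEnd: 320,
  domContentLoadedEventStart: 700,
  loadEventEnd: 900,
};

console.log(breakdownTiming(sample));
// { dns: 25, tcp: 40, request: 200, response: 50, domProcessing: 380, total: 900 }
```

Even this simple split is enough to separate "the network was slow" from "the page was heavy," which is exactly the triage RUM is good at.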
Improvements on the horizon such as the Resource Timing API may improve the situation, but right now RUM is primarily useful for understanding whether problems exist anywhere within an application. It even gives some high-level triage: is the problem in the network, the application server, or the end user's environment? Beyond that, RUM can't tell the difference between a drop in traffic and a loss of network connectivity. Worse yet, an increase in RUM latency might indicate a degradation in backend performance, or it may just reflect a temporary increase in the use of a relatively slow report generation feature. To get this information, RUM alone isn't sufficient.
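That last pitfall is easy to demonstrate with a toy example (all URLs and latency numbers here are invented): if the traffic mix shifts toward a slow-but-healthy report page, aggregate RUM latency rises even though no individual page got slower. Segmenting by URL reveals the difference:

```javascript
// Illustrative RUM samples: latencies in ms, grouped by URL.
// Backend performance is unchanged; only the traffic mix shifts.
const monday  = { '/home': [100, 110, 90, 105], '/report': [2000] };
const tuesday = { '/home': [100, 110, 90, 105], '/report': [2000, 2100, 1900] };

// Mean latency across all samples, ignoring which page they came from.
function overallMean(samples) {
  const all = Object.values(samples).flat();
  return all.reduce((a, b) => a + b, 0) / all.length;
}

// Mean latency per URL, so mix shifts don't masquerade as regressions.
function perUrlMean(samples) {
  const out = {};
  for (const [url, xs] of Object.entries(samples)) {
    out[url] = xs.reduce((a, b) => a + b, 0) / xs.length;
  }
  return out;
}

console.log(overallMean(monday));   // 481
console.log(overallMean(tuesday));  // 915 -- looks like a regression...
console.log(perUrlMean(tuesday));   // ...but every page is as fast as before
```

The aggregate nearly doubles while every per-URL mean is flat: the "slowdown" is just more people running reports.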
Synthetic Performance Monitoring
Unlike RUM, synthetics don't track real user sessions; a script simulates them instead. This has a couple of important implications. First, because the script executes a known set of steps at regular intervals from a known location, its performance is predictable. That makes it more useful for alerting than often-noisy RUM data. Second, because it runs predictably and externally, it's better for assessing site availability and network problems than RUM is, particularly if your synthetic monitoring has integrated network insight.
Many companies actually use this sort of monitoring before getting to production, in the form of integration tests with Selenium. Synthetic transactions in production can re-use these same scripts (as long as they don't change data). As applications get more complex, proxy metrics like load or server availability become less useful for measuring uptime. Running Selenium scripts against production isn't a proxy measurement; it precisely measures uptime, providing full confidence that if the synthetic transactions are completing, the site is up and running.
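At its core, a production synthetic check is just a timed, scripted walk through key transactions. A minimal sketch of such a harness follows; the step names are invented, and the stubbed step bodies stand in for real Selenium WebDriver calls against the live site:

```javascript
// Run a scripted transaction of named steps, timing each one.
// A step is an async function that throws on failure.
async function runSyntheticCheck(steps) {
  const results = [];
  for (const [name, step] of steps) {
    const start = Date.now();
    try {
      await step();
      results.push({ name, ok: true, ms: Date.now() - start });
    } catch (err) {
      results.push({ name, ok: false, ms: Date.now() - start, error: String(err) });
      break; // later steps depend on earlier ones
    }
  }
  return { up: results.every(r => r.ok), results };
}

// Stubbed, read-only steps for illustration; in production these would
// be WebDriver actions (driver.get, form fills, element assertions).
const steps = [
  ['load login page', async () => { /* driver.get(...) */ }],
  ['log in',          async () => { /* fill form, submit */ }],
  ['view dashboard',  async () => { /* assert element present */ }],
];

runSyntheticCheck(steps).then(r =>
  console.log(r.up ? 'site is up' : 'ALERT', r.results));
```

Because the check either completes or it doesn't, "up" here means the actual user journey works, not merely that a server answered a ping.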
Finally, because synthetics have full control over the client (unlike the sandboxed JavaScript powering RUM), the detail they can gather is staggering: full waterfall charts, resource-by-resource performance, and even screenshots or videos of the page load in action to determine paint times. This type of insight is also currently the best way to understand the performance of state transitions in single page apps.
Comparing synthetics with RUM reveals a mess of tradeoffs. Each seems to be better at certain aspects of performance monitoring than the other, so which one wins?
It may not be a competition after all, but rather two complementary puzzle pieces: synthetics provide detail, reliability, and availability measurement, while RUM provides a grounding in real user experience. For this reason, we're of the mind that the best insight into performance comes from a combination of synthetic and real-user monitoring, and this is why AppNeta provides access to both, integrated into TraceView for maximum insight.
Synthetics + RUM = Crazy Delicious
TraceView users have been taking advantage of this integration for months, but it got even better last week with the release of Synthetic-RUM comparison. Now users can plot their real user traffic against synthetics over time and understand differences in performance across regions, browser types, and synthetic scripts.
This allows teams to ensure that their synthetic monitoring is grounded in real-world data, and to view their synthetic performance in a new, global way.
Best of all? You can try it for free today! Click here to get started.