Last week, I presented Caching Up and Down the Stack at the Boston Web Performance meetup. It was great to get the chance to present to the 60+ people who came out for the talk. Unsurprisingly, many of the people there knew a lot about caching in all of the different levels I touched on, and some great conversation developed.
I covered six of the major areas of caching available to web devs today. On the HTML / JS / CSS side, you can use client asset caches, full-page HTTP caches (like Varnish) and partial template caches. On the back end, you can use generated code caches, manually cache objects in memory or nearby services or even enable your database’s query cache (though, as the audience reminded me, if you’re using MySQL, just don’t. See below for why). Caching at each layer has advantages and disadvantages, and throughout, you’ll only be effective if you know what you’re caching, why you’re caching it and if you measure the improvement of doing so. If you want to know more, check out the slides on Slideshare.
Obviously, this is a huge topic, and I didn’t come anywhere close to doing it justice. The audience came up with several interesting points, both during and afterwards.
1. One of the first questions I got was about a comparison between a cached and an uncached page load. I showed this graph:
The main takeaway from this is the drop in bandwidth. 500 KB isn’t a trivial amount to download, especially when spread over 20-odd files, and doubly so on a mobile browser. That’s a great improvement when cached, but somebody asked, “Why did the number of requests change?” It turns out that marking a file as cacheable doesn’t necessarily mean that a connection to the server is unnecessary. For instance, files with an ETag header set will be revalidated against the server, and the server will return a 304 Not Modified if the ETag matches. On the other hand, if the Cache-Control header is set to “public” with an appropriate max-age directive, the content doesn’t have to be revalidated, and the browser may skip the HTTP request entirely. These days, this can actually be quite effective, as the ratio of time spent downloading content vs. establishing connections is lower than it has ever been. If you’ve got content that doesn’t change (because it’s versioned in the filename, for instance), serve it with Cache-Control: public, max-age=large-number!
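Here’s a rough sketch of both behaviors. I’m assuming Flask here, and the route and file names are purely illustrative, not something from the talk:

```python
# Minimal sketch (assuming Flask) of the two caching behaviors discussed above.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/assets/app-v123.js")
def versioned_asset():
    # Version baked into the filename -> the content never changes, so the
    # browser may reuse its copy without any request until max-age expires.
    resp = app.send_static_file("app-v123.js")
    resp.headers["Cache-Control"] = "public, max-age=31536000"
    return resp

@app.route("/")
def home():
    # Mutable content -> let the browser revalidate. If the ETag matches the
    # incoming If-None-Match header, make_conditional() turns the response
    # into a 304 Not Modified with an empty body.
    resp = make_response("<html><body>Hello</body></html>")
    resp.headers["Cache-Control"] = "no-cache"  # always revalidate
    resp.add_etag()
    return resp.make_conditional(request)
```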
2. In the words of one wise audience member, “The MySQL query cache sucks. Never use it.” I don’t disagree. But why? Consider the following properties of this cache:
- It only caches exact matches on queries.
- It is invalidated every time anything in the table changes.
If you think about your data and access patterns, there’s a good chance one or both of these properties make the MySQL query cache unsuitable for production use. Few apps have the large, mostly static data sets and highly repetitive queries the cache needs in order to pay off. In most cases, the query cache will spend its time in one of two states – cold and empty, or busy flushing itself. Neither will make your application appreciably faster.
Instead, consider caching objects at a coarser granularity once you’ve retrieved them from the DB, or consider adding a proper set of indices to ensure that common queries to your tables are quick.
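As a sketch of that coarser-granularity approach, here’s the classic look-aside pattern: cache the assembled object, not the SQL. This assumes the python-memcached client, and load_user_from_db() is a hypothetical stand-in for the expensive DB work:

```python
# Cache-aside sketch: cache whole objects after pulling them from the DB.
import json
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def load_user_from_db(user_id):
    # Stand-in for the expensive part: several queries plus assembly.
    return {"id": user_id, "name": "example", "friends": []}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    mc.set(key, json.dumps(user), time=300)  # cache the whole object for 5 minutes
    return user
```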
3. Though I spent most of my time on object caching, I neglected to mention one of the most effective places to cache language objects: in memory on the machine they’re used on! Object caching generally means retrieving an object from a slower data store (possibly after an expensive computation) and storing the result somewhere that’s cheap to look up. While memcache is great for this, and has the advantage of scaling independently of your app itself, sometimes just stashing an object in a global dictionary can be the best solution. Plus, especially if the set is bounded or never changes, invalidation and cleanup can be as easy as waiting until your next deploy to reboot the app servers!
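A minimal in-process sketch of that “global dictionary” idea; the lookup function here is hypothetical, and the point is simply that the cache lives in the app server’s own memory and vanishes on the next deploy or restart:

```python
# In-process object cache: a module-level dict for a bounded, rarely-changing set.
_PLAN_CACHE = {}

def expensive_plan_lookup(plan_id):
    # Stand-in for the slow data store or computation being cached.
    return {"id": plan_id, "limits": {"users": 10}}

def get_plan(plan_id):
    plan = _PLAN_CACHE.get(plan_id)
    if plan is None:
        plan = expensive_plan_lookup(plan_id)
        _PLAN_CACHE[plan_id] = plan  # cached until the process restarts
    return plan
```

For a pure function with a bounded argument space, Python’s functools.lru_cache gives you the same effect in one decorator.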
All in all, it was great to see everybody out, and I’m looking forward to next month’s Boston Web Performance meetup!