When AppNeta acquired Tracelytics, we had a big vision: bring together the best application, end-user, and infrastructure performance data available from across our varied probes and products. There were of course a lot of product challenges: what key metrics to borrow from each source, how to best present the data, and what personas to tailor it for.
On the engineering side, however, the challenge was perhaps even more interesting: to scale up an ever-growing engineering organization while trying to integrate data from two largely monolithic apps. We didn’t want to just port technology from one place to another, duplicating business logic and fancy processing–instead, we wanted to build out a system of resources on which our next-generation performance monitoring could stand.
We wanted the following properties:
- Reusable sources of data, not tied to any upstream codebase
- Interoperability between pieces of code written in several different languages and runtimes
- A path for the engineering organization to deliver enhancements in spite of increasing complexity
- Performant and scalable operation
To meet this challenge, we turned to a service-oriented architecture. While there were already a few instances of this pattern in our codebases, we hadn’t formally adopted it as a methodology. Now, following the leads of companies like Amazon, Netflix, and Hubspot, we’re breaking out more and more new projects into standalone services.
It’s not all roses–a few caveats below–but the approach has a lot of benefits; Amazon has even cited it in investor relations calls as a driver of their strong engineering organization.
Toes in the water
How do you get started on this path? The route you take will be idiosyncratic, tailored to your application and the business drivers behind the decision, but here are two examples to get you thinking:
- Account state and credentials — if your services rely on maintaining authenticated users and account information across multiple applications or service tiers, this is a natural choice. Account state is rarely as simple as reading a few database columns; there’s often business logic on top that you don’t want to duplicate or have mismatched across projects.
- More broadly: data accessors. Business-level reads of information can worry less about the underlying storage, making them good candidates for standalone services.
- Search & Recommendations — specialized components of business logic may best be implemented in other languages and may need their own resource pool in which to scale horizontally. Additionally, decoupling these from blocking request success can ensure your application meets performance standards even when a backend service is slow or down.
- More broadly: resource-intensive or specialized services. Proxy directly to the service from the web application’s frontend, and you can make these services less blocking from a user-experience point of view.
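To make the "business logic on top of a few database columns" point concrete, here is a minimal sketch in Python. The account fields and seat-limit rules are hypothetical, invented for illustration–the point is that the answer to a question like "can this account add a user?" depends on several columns plus plan-specific rules, and a single accessor service keeps that logic in one place:

```python
from dataclasses import dataclass

# Hypothetical account record as stored: raw columns only.
@dataclass
class AccountRow:
    plan: str             # e.g. "trial", "pro" (illustrative values)
    seats_used: int
    seats_purchased: int
    suspended: bool

class AccountAccessor:
    """Business-level reads over raw account storage.

    Centralizing rules like these behind one service means callers in
    any language or codebase get the same answer, instead of each
    re-implementing (and eventually mismatching) the logic.
    """
    def __init__(self, row: AccountRow):
        self._row = row

    def can_add_user(self) -> bool:
        # The answer depends on multiple columns plus plan-specific
        # rules -- exactly the logic you don't want duplicated.
        if self._row.suspended:
            return False
        if self._row.plan == "trial":
            return self._row.seats_used < 3  # trials capped at 3 seats
        return self._row.seats_used < self._row.seats_purchased

row = AccountRow(plan="pro", seats_used=9, seats_purchased=10, suspended=False)
print(AccountAccessor(row).can_add_user())  # True: one seat remains
```

Expose `can_add_user` over an HTTP endpoint and every application tier can ask the same question without touching the underlying tables.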
A few things to keep in mind
- Concurrency – Some microservices are compute-bound, but most are I/O-bound. This is particularly true of data accessors. Consider using an event-based model like node.js, EventMachine, or gevent/tornado in order to maximize concurrency with low RAM overhead.
- Observability – When breaking an application out into microservices, using a standard and well-supported RPC transport (like a RESTful HTTP service) pays dividends. Everybody has a built-in client–curl, their browser, etc.–and you can use a tool like TraceView to monitor your requests as they cross API tiers.
- Codebase overlap – Ideally, your microservices share no non-library code with each other. Reusing common codepaths between microservices can create testing and compatibility dependencies that will slow down your team and introduce risk to projects. If you find yourself having to duplicate code to break out a service, that can be a sign that the services should be joined, or that the common code belongs upstream in a shared library.
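The concurrency point is worth seeing in miniature. The runtimes named above (node.js, EventMachine, gevent/tornado) all share the same event-loop model; the sketch below uses Python's standard-library asyncio purely as a stand-in, with `asyncio.sleep` playing the role of a slow I/O-bound call:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (database read, downstream HTTP
    # request); the event loop runs other coroutines while this waits.
    await asyncio.sleep(delay)
    return name

async def main() -> list:
    # Ten 0.1s "requests" issued concurrently on a single thread: wall
    # time is roughly 0.1s, not the ~1s a serial loop would take, and
    # each in-flight request costs far less RAM than a dedicated thread.
    start = time.monotonic()
    results = await asyncio.gather(*(fetch(f"req-{i}", 0.1) for i in range(10)))
    elapsed = time.monotonic() - start
    assert elapsed < 0.5  # concurrent, not serial
    return results

print(asyncio.run(main()))
```

For a data accessor that spends most of its time waiting on a database or a downstream API, this is how one process sustains many simultaneous requests with low overhead.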
Whether you’re starting fresh with a new project, trying to break apart an application that’s gotten unmanageable, or joining up previously separate functionality, microservices can provide a solid framework for scaling everything up.