When people start talking about DevOps, the idea of metrics usually comes along for the ride. To be able to monitor software after release, we need to know what data is important to us. With so many options, it can seem overwhelming to know where to look. However, we can limit our options based on two key factors: what decisions we’ll make and how customer-focused they are. With that in mind, I’ll share what I believe to be the five most important DevOps metrics.
Metrics Are for Decisions
The thing about metrics is that they’re useless on their own. People often say, “We need to track this data!” But you need only ask them one question: what decisions will you make with that data? You may be surprised how often—usually after some mumbling—the answer is “I don’t know.” Any metric that doesn’t support a decision or set of decisions we may want to make ahead of time is simply noise. We want to eliminate noise from our minds and focus on what guides our decisions for our team.
Customers First, Then Everything Follows
Knowing what decisions our metrics will support is a good start, but it’s not enough. There are millions of decisions we could make about what we’re seeing. We need a North Star, a guiding light, that will be the anchor from which we can derive a strong set of metrics. This anchor is our customers. For any metric we use, we should be able to point back to how it helps our customers. After all, we ultimately owe them our existence.
Top Five Metrics
Without further ado, I give you the top five DevOps metrics you probably should measure for your team:
- Customer usage
- Highest and average latency
- Number of errors per time unit
- Highest lead time
- Mean time to recovery
Customer Usage
The first metric on our list is customer usage. This is any measurement that tells us how much our customers, internal or external, are using our features. When delivering new or enhanced features, it’s important to get to production as soon as possible. But we can’t assume customers want or will use a feature just because we put it in production. This is true even if they specifically ask for the feature. We can weigh how popular a feature actually is against how popular someone claimed it would be or what we estimated it would be.
It’s helpful for us to know how often customers use a feature—even one they requested—after we release it to production and inform them of its existence. Customers often think they need something “right away.” This can cause us to scramble, putting this feature on the top of our backlog. The feature might then sit, inert, for weeks or months because the customers reprioritized their desires.
Internal customers are commonly on a longer cadence, unable to use the feature until they get to it in their own backlog. Tracking customer usage allows us to say, “I know you said this is really urgent, but the last time you said that, it took you six weeks to start using it. Please be sure this is as urgent as you say it is.” We can also use this data to enhance the feature, watching usage go up or down, using hypothesis-driven development.
A good application performance monitoring (APM) tool can track this metric for you. It usually comes in the form of request counts or percentage of traffic.
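To make “request counts or percentage of traffic” concrete, here’s a minimal sketch in Python of counting requests per feature and each feature’s share of traffic. The feature names and the in-memory counter are purely illustrative assumptions; in practice, your APM agent or metrics library records this for you.

```python
from collections import Counter

# In-memory usage counters; a real setup would report these to your APM or
# metrics backend rather than keep them in process memory.
feature_hits = Counter()
total_requests = 0

def record_usage(feature_name: str) -> None:
    """Call this from a request handler whenever a feature is exercised."""
    global total_requests
    feature_hits[feature_name] += 1
    total_requests += 1

def usage_report() -> dict:
    """Per-feature request counts and share of total traffic."""
    return {
        feature: {"requests": hits, "traffic_pct": 100.0 * hits / total_requests}
        for feature, hits in feature_hits.items()
    }

# Example: simulate a few requests against two hypothetical features.
for feature in ["export-report", "export-report", "bulk-upload"]:
    record_usage(feature)
print(usage_report())
```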
Highest and Average Latency
Knowing how often customers use your features is a great start. But how do we know if customers are delighted or frustrated with our applications? This is a hard question to answer, but our next metric can hint to us that customers may be frustrated. One of the leading causes of frustration is an application’s slowness. When the response time—that is, the latency—is too high, customers are likely to go elsewhere for their needs.
We want to give our applications the best chance to make customers happy. They’ll appreciate it and likely stick around. If you have internal customers, it may be tempting to say, “They have to use my application, so I don’t need to worry about latency.” Putting aside the potential ethics issue of not caring whether your users have a pleasant experience, that mindset is folly. Even if your direct customers are internal, it’s likely that they or a downstream app are responding to external customers. So, slowness for them is still ultimately hurting your organization’s success. Even if this isn’t the case, enough complaints to the right people may get your applications scrapped.
Two major signals to look for when measuring latency are average latency and the slowest five percent or so of requests. Looking at the average gives you a nice bird’s-eye view of the application as a whole. But even one feature or subset of requests can be enough to create disgruntled customers. This is why it’s also important to keep an eye on your slowest requests.
We can decide where to tune performance with this information. An APM tool can handily monitor all of this for you, in addition to usage.
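If you want to see what those two signals look like in code, here’s a rough sketch that computes the average latency and the slowest five percent from a batch of sampled request times. The sample values and the exact percentile cutoff are assumptions; an APM tool would normally calculate these for you.

```python
import statistics

def latency_summary(latencies_ms):
    """Average latency plus the p95 cutoff and the mean of the slowest ~5% of requests."""
    ordered = sorted(latencies_ms)
    cutoff = int(len(ordered) * 0.95)           # index where the slowest ~5% begins
    slowest = ordered[cutoff:] or ordered[-1:]  # guard against tiny samples
    return {
        "avg_ms": statistics.mean(ordered),
        "p95_ms": slowest[0],
        "slowest_5pct_avg_ms": statistics.mean(slowest),
    }

# Example: 20 sampled request latencies in milliseconds; note the one outlier.
samples = [42, 45, 38, 50, 47, 44, 41, 39, 55, 48,
           43, 40, 46, 52, 49, 44, 41, 37, 900, 45]
print(latency_summary(samples))
```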
Number of Errors Per Time Unit
In the same vein of finding out whether our customers are happy, we have the metric of number of errors per time unit. The benefits of this should be pretty clear. Errors with high business impact not only cost your organization money, but they can erode customer trust. Looking at our error rates helps us nip these problems in the bud and catch abnormalities that even our tests couldn’t prevent.
Note that I said “errors with high business impact.” Not all errors are created equal. Your error metrics should differentiate between types of errors. Small glitches and errors are unlikely to erode customer trust or cost a lot of money. For example, if the screen is green instead of blue, that usually won’t be a problem for most people. Also, some errors are caused by users and should be expected. User errors are still good to track because they can provide information about how hard a feature is to use. Just be sure to keep them separate in your monitoring tool.
With this metric in hand, we can decide where to enhance our resiliency. If we can’t control the source of an error, we can decide to escalate that error to the appropriate team. For user errors, we can decide where to focus our efforts on increasing usability.
APM tools are also a great fit for this metric.
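As a quick illustration, here’s a small Python sketch that buckets errors into per-minute counts while keeping user errors separate from system errors. The log entries and error types are hypothetical; your monitoring tool’s tags or labels play the same role.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical error log entries: (timestamp, error_type).
# "user" errors are still tracked, but kept separate from system errors.
error_log = [
    (datetime(2023, 5, 1, 10, 0, 12), "payment_gateway"),
    (datetime(2023, 5, 1, 10, 0, 45), "user"),
    (datetime(2023, 5, 1, 10, 1, 3),  "payment_gateway"),
    (datetime(2023, 5, 1, 10, 1, 30), "user"),
]

def errors_per_minute(log):
    """Bucket errors into per-minute counts, keyed separately by error type."""
    buckets = defaultdict(int)
    for timestamp, error_type in log:
        minute = timestamp.replace(second=0, microsecond=0)
        buckets[(minute, error_type)] += 1
    return dict(buckets)

for (minute, error_type), count in sorted(errors_per_minute(error_log).items()):
    print(f"{minute:%H:%M} {error_type}: {count}")
```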
Highest Lead Time
Ideally, the work you deliver in your team is set up as a value stream, creating a flow of work from inception to customer usage. This lets us easily identify the individual steps it takes for a piece of software, usually a user story, to reach the customer’s hands. Think of it like an assembly line, but for software features. It’s helpful for us to look at the lead time that a user story takes to go through each step. This helps our customers by increasing the speed at which we get features into their hands.
If we adopt a Theory of Constraints approach, there’s always one highest lead time in our value stream. If we keep finding and reducing that highest lead time, we’ll be ever faster in our ability to deliver software. Say, for example, our value stream has a “coding” step and a “QA testing” step. We can represent each step as a column on a Kanban board and track which user stories are in “coding” versus “QA testing.” At the end of our iteration, we may see that cards sit in “QA testing” for three days on average, whereas cards sit in “coding” for only two days. “QA testing” is our highest lead time. We can then inspect why it takes so long to do QA testing and make improvements from there.
Lead time comprises two factors: process time and wait time. Process time is the time someone is actively doing something with the user story. Wait time is how long the user story sits idle, finished from the previous step and waiting to be picked up by the next step. Knowing both of these values separately will help the team know what actions they can take to improve the lead time. The decisions you make from this will vary, but it’s good to have a system in place to frequently inspect and adapt to this metric. A sprint retrospective is a great example of such a system. And, as stated earlier, a Kanban board is a great way to track this metric.
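Here’s a small sketch of how you might split a card’s lead time into wait time and process time per step, and find the step with the highest lead time. The timestamps and step names are made up for illustration; a Kanban tool that records when cards move between columns gives you the same raw data.

```python
from datetime import datetime

# Hypothetical card history: for each value-stream step, when the card entered
# the step, when work actually started, and when work finished.
card_history = {
    "coding": {
        "entered": datetime(2023, 5, 1, 9, 0),
        "started": datetime(2023, 5, 1, 10, 0),
        "finished": datetime(2023, 5, 2, 17, 0),
    },
    "QA testing": {
        "entered": datetime(2023, 5, 2, 17, 0),
        "started": datetime(2023, 5, 4, 9, 0),
        "finished": datetime(2023, 5, 5, 12, 0),
    },
}

def lead_time_breakdown(history):
    """Split each step's lead time into wait time and process time."""
    breakdown = {}
    for step, times in history.items():
        wait = times["started"] - times["entered"]
        process = times["finished"] - times["started"]
        breakdown[step] = {"wait": wait, "process": process, "lead": wait + process}
    return breakdown

breakdown = lead_time_breakdown(card_history)
bottleneck = max(breakdown, key=lambda step: breakdown[step]["lead"])
for step, times in breakdown.items():
    print(step, times)
print("Highest lead time:", bottleneck)
```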
Mean Time to Recovery
The final metric, mean time to recovery, is somewhat of an extension of our error count metric. While it’s good to know how many errors we’re getting, it’s also important to know how fast we can resolve these errors. This goes back to business impact. Business impact is a function both of how often we receive an error and how long it takes to recover from that error. One error that lingers for minutes could have more impact than 20 errors that last only a few milliseconds.
Having both of these metrics gives us a good line of sight into the business impact of our errors. This metric is also a good indicator of how equipped your team is to handle operational issues, which is an often underinvested part of a team’s tooling.
We can use this metric to decide where we want to improve our insight into our application, such as by adding more logging context. We can also use this metric to help us decide how to simplify our architecture or make our code more readable.
Many tools specialize in error tracking to make it easy to see how quickly the team resolves issues. Some APM tools also have error tracking features.
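For completeness, here’s a tiny sketch of the arithmetic: mean time to recovery is just the total time spent recovering divided by the number of incidents. The incident timestamps below are hypothetical; an error-tracking tool derives the same figure from when an issue was detected and when it was resolved.

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (detected, resolved) timestamps.
incidents = [
    (datetime(2023, 5, 1, 10, 0),  datetime(2023, 5, 1, 10, 25)),
    (datetime(2023, 5, 3, 14, 10), datetime(2023, 5, 3, 16, 40)),
    (datetime(2023, 5, 7, 9, 5),   datetime(2023, 5, 7, 9, 20)),
]

def mean_time_to_recovery(incidents) -> timedelta:
    """MTTR = total time spent recovering / number of incidents."""
    total_downtime = sum((resolved - detected for detected, resolved in incidents), timedelta())
    return total_downtime / len(incidents)

print("MTTR:", mean_time_to_recovery(incidents))
```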
Strength in Measurement
The key to good measurement is to understand what decisions we’ll be making. These decisions will be most effective when we center our customers. Drawing from this, we can derive a set of strong metrics that ensure our team operates at its best. With these metrics, no challenges will stand in our way for long.
Author: Mark Henke
Mark has spent over 10 years architecting systems that talk to other systems, doing DevOps before it was cool, and matching software to its business function. Every developer is a leader of something on their team, and he wants to help them see that.