In recent years, DevOps has taken the engineering world by storm as a framework for building and shipping better software, more seamlessly. The mobile world has taken particular interest in DevOps lately because it helps address some of the unique challenges building apps entails. But as your mobile team moves to make adjustments to your development process to align with DevOps best practices, measuring the impact of changes is as important as investing in change to begin with — you can’t improve what you don’t measure. By tracking just a handful of metrics across your team’s entire process, you can gain a deeper understanding of how your team is performing, and build a data-driven framework to help you incrementally improve your mobile team’s DevOps practice.
Key metrics to help you track the DevOps performance of your mobile team
Knowing what to measure is the first step towards developing a strong data-driven approach to improving any process. Luckily, we can lean on extensive research like that of the DevOps Research Assessment (DORA) team at Google, which has identified a number of key metrics that — when properly measured and tracked — can serve as a rubric for assessing how well your team is building and shipping product.
But beware: in the mobile world, we often find ourselves needing to make adjustments to concepts that the rest of the tech industry can use out of the box, and DevOps metrics are no exception. With some thoughtful tailoring, it’s possible to create a solid framework for evaluating your mobile team’s process, and for measuring the impact of improvements made over time — all in a data-driven way.
Change lead time
The time between a commit’s creation and when it’s deployed to production.
The fundamental idea behind change lead time is to measure the lifecycle of the simplest item of code work — a git commit. Change lead time measures the average amount of time it takes a commit to go from being created, to being deployed and available to users in production. In essence, change lead time is measuring the efficiency of your team’s development and release process.
There are a number of reasons the change lead time metric is interesting:
- It speaks to the process of getting commits into a deployable state. Things like your team’s branching strategy and code review process can affect how quickly a commit can go from being created to being merged into the working branch
- It captures the efficiency of your release process. Once a commit has been merged into the main working branch, the amount of time it has to wait before being released to production will be reflected in your change lead times
- It can surface differences across different types of work. With additional segmentation, you can get a sense for how change lead times vary depending on the type of work the commit contains. For example, the change lead times of bug tickets could be quite different from that of bigger features
Generally, shorter change lead times are a positive indicator, and you can find plenty of articles setting benchmarks of change lead times as short as a couple of hours for DevOps-mature teams. But as we know, mobile teams aren’t in control of their own distribution: Google and Apple dictate if and how updates are released to users. Not only does this significantly slow down change lead times, but the third-party dependency also affects how change lead times are calculated and makes capturing the true cycle difficult. Variation in app store review times across Google and Apple, and over time, adds yet another complication, making change lead time values inconsistent and limiting their usefulness for tracking trends.
For all the reasons above, we recommend subtracting the time spent waiting for and in app store review from your change lead time calculations. This results in a more useful measure of end-to-end lead time for changes, removing factors that are out of the team’s control. Even after applying this tactic, remember that as a mobile team you’ll still see higher change lead times on average compared to many peers distributing other kinds of software due to the additional complexity of building and shipping binaries.
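To make this concrete, here’s a minimal sketch in Python of how change lead time could be computed from commit and release timestamps while subtracting app store review time. The records and field names (like review_wait) are hypothetical stand-ins for whatever your git hosting and release tooling can export.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical records: when each commit was authored, when the release containing
# it reached production, and how long that release spent waiting for and in review.
commits = [
    {"sha": "a1b2c3", "authored": datetime(2023, 5, 1, 9, 0),
     "released": datetime(2023, 5, 8, 17, 0), "review_wait": timedelta(hours=30)},
    {"sha": "d4e5f6", "authored": datetime(2023, 5, 3, 14, 0),
     "released": datetime(2023, 5, 8, 17, 0), "review_wait": timedelta(hours=30)},
]

def change_lead_time(commit):
    """Time from commit creation to production, minus app store review time."""
    return commit["released"] - commit["authored"] - commit["review_wait"]

avg_hours = mean(change_lead_time(c).total_seconds() for c in commits) / 3600
print(f"Average change lead time (excluding review): {avg_hours:.1f} hours")
```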
Cycle time
The time between a feature or change being requested and when that item of work is in production.
Whereas change lead time provides a measure of the efficiency of your team’s development and release process, cycle time takes a broader view to include the entire product development cycle: from the initial request for a feature or change, to the time it’s finally deployed to production and in the hands of users. Cycle time is what users “see” — for example, how long does a user have to wait from the time they report a bug in your app until a fix is out and in their hands? While cycle time overlaps with change lead time, it can also answer questions about earlier stages of the development cycle like:
- How efficiently are bug tickets triaged?
- How long do feature requests spend in a product backlog?
- If designs or additional product requirements are needed for a given item, how efficient is that process?
- How long do features spend waiting for development to begin?
- How long does it take to complete development on an item of work?
Shorter cycle times are generally better because they’re an indicator of your team’s ability to triage, plan, scope, develop, and release work efficiently — ultimately delivering more value to users, faster.
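As a rough illustration, the sketch below computes cycle time from the timestamps most issue trackers can export: when an item was requested and when the release containing it reached production. The work item structure and type labels are assumptions for the example, not a prescribed schema.

```python
from datetime import datetime
from statistics import mean

# Hypothetical export from an issue tracker: request date and production date per item.
work_items = [
    {"key": "APP-101", "type": "bug",     "requested": datetime(2023, 6, 1),  "in_production": datetime(2023, 6, 9)},
    {"key": "APP-102", "type": "feature", "requested": datetime(2023, 5, 20), "in_production": datetime(2023, 6, 16)},
    {"key": "APP-103", "type": "bug",     "requested": datetime(2023, 6, 5),  "in_production": datetime(2023, 6, 9)},
]

def avg_cycle_time_days(items):
    """Average days from initial request to production for a set of work items."""
    return mean((i["in_production"] - i["requested"]).days for i in items)

print(f"Overall cycle time: {avg_cycle_time_days(work_items):.1f} days")

# Segmenting by work type can show, for example, whether bug fixes reach users
# faster than larger features.
for work_type in ("bug", "feature"):
    subset = [i for i in work_items if i["type"] == work_type]
    print(f"  {work_type}: {avg_cycle_time_days(subset):.1f} days")
```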
Release frequency
The cadence at which your mobile team releases updates to production.
How regularly your team is able to release updates to the app stores — release frequency — is another measure that can speak to the overall maturity of your development and release process. Release frequency is similarly limited by the formal review process of the app stores, making multiple daily deployments — a DevOps goal for other kinds of software — an impossibility for mobile. When it comes to release frequency for mobile teams, a good benchmark to aim for is typically a release every one to two weeks.
Among the various factors that can affect release frequency, an important one to call out is your team’s release style. For example, teams that release in an ad hoc way, whenever specific features are ready to ship, are less likely to have a consistent release frequency. By contrast, teams running regular release trains will see a more consistent release frequency correlating to the cadence of their release train — and teams that automate parts of their release train to help keep them on schedule will see that reflected in their release frequency metric over time.
Having a consistent release frequency means you’re able to deliver value to users more reliably and predictably. And having a higher release frequency means you’re able to see the impact of work and iterate more quickly as features and fixes reach users more efficiently — in fact, higher release frequency also means shorter cycle times. Generally, your team should work to increase release frequency as much as possible without compromising quality of the changes being introduced with each release.
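One simple way to quantify both the cadence and its consistency is to look at the gaps between consecutive release dates, as in this illustrative sketch (the dates are made up):

```python
from datetime import date

# Hypothetical production release dates pulled from your release history.
release_dates = sorted([
    date(2023, 7, 3), date(2023, 7, 10), date(2023, 7, 19),
    date(2023, 7, 24), date(2023, 8, 1),
])

# Days between consecutive releases.
gaps = [(later - earlier).days
        for earlier, later in zip(release_dates, release_dates[1:])]

# The average gap is your release frequency; the spread hints at consistency:
# a regular release train keeps the range narrow, ad hoc releases widen it.
print(f"Average days between releases: {sum(gaps) / len(gaps):.1f}")
print(f"Shortest/longest gap: {min(gaps)} / {max(gaps)} days")
```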
Release failure rate
The percentage of releases that result in failure — however your team defines failure!
Besides capturing the efficiency of how your team builds and ships products, it’s equally important to have some measure of the quality of what you’re building and shipping. One metric that gets at this is release failure rate — that is, the percentage of releases that result in “failure” in production. How failure is defined will vary from team to team, but possible contributors to a release being considered “failed” include:
- Any of your critical health metrics falling outside acceptable thresholds (think crash-free rates, averages around key user behavior like conversions or signups, user ratings, etc.)
- Bug reports escalated by customer support or ops
- A phased rollout needing to be halted, for any reason, and not resumed
- A hotfix release needing to be issued
Calculating release failure rate for a given period is as simple as taking the total number of “failed” releases (by whatever measure your team thinks is appropriate) and dividing that number by the total number of releases.
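In code, that calculation is a one-liner once each release is flagged against your team’s own definition of failure; the release log below is a hypothetical example:

```python
# Hypothetical release log: each entry flags whether the release met your team's
# definition of "failed" (health metrics breached, rollout halted, hotfix needed, ...).
releases = [
    {"version": "4.2.0", "failed": False},
    {"version": "4.3.0", "failed": True},   # rollout halted after a crash spike
    {"version": "4.3.1", "failed": False},  # hotfix for 4.3.0
    {"version": "4.4.0", "failed": False},
]

failed = sum(1 for r in releases if r["failed"])
print(f"Release failure rate: {failed / len(releases):.0%}")  # 25% for this sample
```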
Tracking release failure rate over time is a good way to keep tabs on quality, and it’s especially important to closely monitor any upward trends.
Time to recovery
The total time it takes from a release failure to when a fix is released.
If there’s been an issue with a release, the amount of time between when the issue was first identified and when the issue is resolved is known as the time to recovery. This metric is essentially a measure of your team’s ability to quickly triage, fix, and deploy any issues that have caused a failure in production. There are a lot of factors that play into your team’s time to recovery, like what monitoring is in place to identify and alert on any issues, as well as your team’s process for triaging issues so the right team is tasked with issuing a fix. But at a high level, you ideally want your team’s release failure rate to stay low while your time to recovery is quick, an indication that critical issues are quickly identified, triaged, and resolved with minimal disruption to your users.
Because the time to recovery metric covers the time it takes to release a fix to users, we recommend subtracting the time a release spends waiting for and in app store review from the total duration to get a more accurate measure of factors within your team’s control.
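Here’s a minimal sketch of that calculation, assuming you record when each failure was identified, when the fix reached users, and how long the fix release sat in app store review; all timestamps below are invented for illustration.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records for failed releases.
incidents = [
    {"identified": datetime(2023, 9, 4, 10, 0),
     "fix_released": datetime(2023, 9, 6, 15, 0),
     "review_wait": timedelta(hours=20)},
    {"identified": datetime(2023, 9, 20, 8, 30),
     "fix_released": datetime(2023, 9, 21, 18, 0),
     "review_wait": timedelta(hours=8)},
]

def time_to_recovery(incident):
    """Identification-to-fix duration, minus time spent waiting for and in review."""
    return incident["fix_released"] - incident["identified"] - incident["review_wait"]

avg_hours = mean(time_to_recovery(i).total_seconds() for i in incidents) / 3600
print(f"Average time to recovery (excluding review): {avg_hours:.1f} hours")
```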
Advanced metrics: how to understand your team’s Mobile DevOps performance with more granularity
Beyond the five key metrics described above, it’s possible to measure and track even more granularly in order to get a more nuanced and complete look at the different parts of your development and release process.
Step cycle time
Every team’s process is made up of a number of discrete steps — think of the product planning stage, followed by development, testing, approvals, and finally release. While the traditional cycle time metric measures the average time it takes any piece of work to move through your entire development and release process, it can be valuable to zoom in and track how long each individual step is taking.
How you choose to break up your development process into discrete steps that can be measured will depend on your team’s unique process — and on which steps you’re most interested in measuring — but consider the following as a good starting point:
- Wait time: how much time on average passes between when a work item is created and when development begins?
- Work time: how long on average does an item of work spend in active development?
- Deploy time: how long on average does it take an item of work that has finished development to reach production?
You can get even more granular by measuring the cycle times of discrete steps even within any one of these higher-level steps. For example, you may choose to focus on the efficiency of your release process (deploy time) by measuring the cycle time of each discrete step in the release process: creating the release candidate build, regression testing, and collecting any necessary approvals are all potential steps of the release process that can be independently measured and tracked to help identify specific bottlenecks in your process.
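If your issue tracker records when an item moves between states, step cycle times fall out of those transition timestamps. The sketch below assumes a simplified four-state workflow; the step names are placeholders to map onto your own process.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical workflow history for one work item: when it entered each step.
transitions = [
    ("created",        datetime(2023, 10, 2, 9, 0)),    # wait time starts
    ("in_development", datetime(2023, 10, 5, 11, 0)),   # work time starts
    ("in_release",     datetime(2023, 10, 12, 16, 0)),  # deploy time starts
    ("in_production",  datetime(2023, 10, 17, 10, 0)),
]

# A step's duration is the time until the item entered the next step.
step_durations = defaultdict(list)
for (step, entered), (_, entered_next) in zip(transitions, transitions[1:]):
    step_durations[step].append(entered_next - entered)

for step, durations in step_durations.items():
    hours = sum(d.total_seconds() for d in durations) / 3600
    print(f"{step:>15}: {hours:.0f} hours")
```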
Step wait time
For any given step in your development process, an interesting metric to measure and track is “wait time” — that is, once the process enters any given stage, how long does it spend in a “waiting” state before the work associated with the step begins? For example, once a release candidate build has been created, how much time elapses between when the build is made available to your QA team for testing and when testing actually begins?
Tracking wait times and keeping them in check is important for ensuring that things are moving through the various stages of your development and release process as efficiently as possible. Wait time is, generally speaking, unproductive time — a period in which the process has stalled and isn’t making progress. When certain steps have higher wait times, it could be a sign to dig into why that might be the case.
It’s possible to measure step wait times for any number of steps in your process. At a high level, the goal should be to reduce unnecessarily long wait times at any given step, so you can feel confident that work never gets stuck in a state where it isn’t progressing through your team’s process.
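A small sketch of the same idea: for each step, record when the item entered the step and when work in that step actually began, then report the gap. The steps and timestamps here are invented for the example.

```python
from datetime import datetime

# Hypothetical per-step records for one release candidate.
steps = [
    {"step": "regression testing", "entered": datetime(2023, 11, 6, 14, 0),
     "started": datetime(2023, 11, 8, 9, 30)},
    {"step": "release approvals",  "entered": datetime(2023, 11, 9, 12, 0),
     "started": datetime(2023, 11, 9, 15, 0)},
]

for s in steps:
    wait_hours = (s["started"] - s["entered"]).total_seconds() / 3600
    print(f"{s['step']}: waited {wait_hours:.1f} hours before work began")
```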
DevOps metric breakdowns at the team level
Certain DevOps metrics lend themselves well to understanding how effectively teams are functioning at the feature team level. Lead time and cycle time metrics, for example, can easily be segmented by the team that owns each item of work, yielding lead time and cycle time values for each individual feature team. This can be valuable for understanding whether there are meaningful differences in how different teams are building and shipping product, surfacing opportunities to standardize and consolidate parts of the process across your entire mobile org.
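Segmenting is usually just a matter of tagging each work item with its owning team and grouping before you average, as in this hypothetical sketch:

```python
from datetime import datetime
from statistics import mean
from collections import defaultdict

# Hypothetical work items tagged with the feature team that owns them.
work_items = [
    {"team": "payments",   "requested": datetime(2023, 6, 1), "in_production": datetime(2023, 6, 20)},
    {"team": "payments",   "requested": datetime(2023, 6, 5), "in_production": datetime(2023, 6, 27)},
    {"team": "onboarding", "requested": datetime(2023, 6, 2), "in_production": datetime(2023, 6, 12)},
]

cycle_times_by_team = defaultdict(list)
for item in work_items:
    cycle_times_by_team[item["team"]].append((item["in_production"] - item["requested"]).days)

for team, cycle_times in cycle_times_by_team.items():
    print(f"{team}: average cycle time {mean(cycle_times):.1f} days")
```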
Setting your team up for success with Mobile DevOps metrics
Investing in properly defining, collecting, and tracking mobile DevOps metrics over time can be a powerful way to measure how efficiently your team’s development process is functioning. Defining and collecting the necessary data to compute key metrics can be one of the most difficult parts of getting started with mobile DevOps metrics, in part because how they’re defined is unique to each team and their specific process, but also because many of the metrics touted by the general DevOps community need to be adapted to work well for mobile use cases. In this post, we’ve covered five key DevOps metrics to get you started, plus some advanced metrics for teams looking to gain a more nuanced view into their team’s performance. In a future post, we’ll do a deep dive on how to use all of these metrics to create meaningful improvements in your team’s process and evolve your mobile DevOps practice as a result.