There’s been a lot of talk about Mobile DevOps lately; teams generally understand the benefits and are motivated to use it as a framework to improve their overall mobile practice. But actually measuring how your team is performing on the path to DevOps greatness can be extremely difficult to do accurately — or at all — and, as a result, it’s equally difficult to iterate and improve on your DevOps practices themselves.
The problem is that, even though there are well-regarded ways to quantify DevOps performance (e.g. DORA metrics), they require you to continuously crunch numbers on a lot of different data, spanning multiple tools and sources. Calculating things like failure rates, lead and cycle times, and time to recovery involves piecing together info from source control, your project management platform, the app stores, observability tools, and more. Doing this well calls for more than napkin math, and for metrics that are regularly refreshed.
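To get a feel for the arithmetic behind a couple of those DORA-style numbers, here's a minimal sketch in Python, using made-up records standing in for data you'd pull from your release tooling and ticketing system (in practice, the hard part is assembling and refreshing that data, not the math):

```python
from datetime import datetime
from statistics import median

# Hypothetical, hand-assembled records pulled from source control,
# project management, and release tooling.
releases = [
    {"version": "2.4.0", "shipped_at": datetime(2024, 3, 1), "failed": False},
    {"version": "2.4.1", "shipped_at": datetime(2024, 3, 8), "failed": True},
    {"version": "2.5.0", "shipped_at": datetime(2024, 3, 15), "failed": False},
]
tickets = [
    {"key": "APP-101", "created_at": datetime(2024, 2, 20), "released_at": datetime(2024, 3, 1)},
    {"key": "APP-107", "created_at": datetime(2024, 3, 2), "released_at": datetime(2024, 3, 15)},
]

# Change failure rate: share of releases that went bad.
failure_rate = sum(r["failed"] for r in releases) / len(releases)

# Lead time: how long work takes from ticket creation to reaching users.
lead_times = [(t["released_at"] - t["created_at"]).days for t in tickets]

print(f"Failure rate: {failure_rate:.0%}")
print(f"Median lead time: {median(lead_times)} days")
```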
Improve your team’s Mobile DevOps practices, powered by Runway
Because Runway sits a level above the rest of your stack, we’re uniquely positioned to help here. Pulling together inputs from your entire toolchain and continuously crunching all the numbers for you, Runway will now surface a range of key Mobile DevOps metrics that allow your team to understand how you’re performing and help identify areas for improvement.
The entire development cycle is in scope: Runway analyzes items of work stretching as far back as first ticket creation, and, crucially, its integrations with the app stores allow for more granular and accurate insights into timing at the tail end of cycles as well.
Plus, certain metrics surfaced by Runway reflect the unique setup of your workflows and release process in the platform, giving you tailored insights into your team’s own way of working.
Read on for an overview of the different measures of Mobile DevOps performance that you can now keep tabs on with Runway!
Release failure rate
How often do you ship bad releases?
Runway looks at multiple factors to determine whether a release went bad: whether a hotfix was issued afterwards, whether your team’s configured health metrics went “unhealthy” during rollout, or whether you halted a phased rollout and never resumed it.
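As a rough illustration of that kind of check (a hypothetical sketch, not Runway's actual logic), a release could be flagged as failed whenever any of those signals is present:

```python
from dataclasses import dataclass

@dataclass
class Release:
    version: str
    hotfixed_after: bool            # a hotfix release followed this one
    health_went_unhealthy: bool     # configured health metrics crossed thresholds during rollout
    rollout_halted_for_good: bool   # phased rollout was halted and never resumed

def release_failed(release: Release) -> bool:
    """Treat a release as 'failed' if any of the bad-release signals fired."""
    return (
        release.hotfixed_after
        or release.health_went_unhealthy
        or release.rollout_halted_for_good
    )

print(release_failed(Release("2.4.1", hotfixed_after=True,
                             health_went_unhealthy=False,
                             rollout_halted_for_good=False)))  # True
```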
Time to recovery
If you do ship a bad release, how quickly do you get a fix out?
For any release considered “failed” (see previous metric), if your team decides to issue a hotfix, Runway will measure how long it takes to get that hotfix release out the door.
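In spirit, the measurement is just the gap between the failed release going out and its hotfix reaching users; a hypothetical sketch:

```python
from datetime import datetime

def time_to_recovery(failed_release_shipped_at: datetime,
                     hotfix_shipped_at: datetime) -> float:
    """Hours between a failed release going out and its hotfix reaching users."""
    return (hotfix_shipped_at - failed_release_shipped_at).total_seconds() / 3600

print(time_to_recovery(datetime(2024, 3, 8, 10, 0),
                       datetime(2024, 3, 9, 16, 30)))  # 30.5 hours
```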
Lifecycle timing
For a given item of work, how much time is spent each step of the way from inception to release?
Runway tracks work end-to-end – from first ticket, to code written, to code merged to trunk, to release. This gives your team a complete and granular understanding of how long it takes to get features and fixes out to users, and where bottlenecks exist.
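Conceptually, this boils down to diffing timestamps between the milestones each item of work passes through; here's an illustrative sketch with made-up field names:

```python
from datetime import datetime

# Hypothetical milestone timestamps for one item of work.
item = {
    "ticket_created":  datetime(2024, 2, 20, 9, 0),
    "first_commit":    datetime(2024, 2, 26, 14, 0),
    "merged_to_trunk": datetime(2024, 3, 1, 11, 0),
    "released":        datetime(2024, 3, 8, 10, 0),
}

# Time spent in each stage, from inception to release.
stages = ["ticket_created", "first_commit", "merged_to_trunk", "released"]
for earlier, later in zip(stages, stages[1:]):
    days = (item[later] - item[earlier]).total_seconds() / 86400
    print(f"{earlier} -> {later}: {days:.1f} days")
```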
Release frequency
Weekly? Biweekly? All over the place?
Runway will keep tabs on your release cadence and help ensure your team is shipping as often as you want to be.
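One simple way to quantify cadence (again, an illustrative sketch rather than how Runway computes it) is to look at the gaps between consecutive release dates and how much they vary:

```python
from datetime import date
from statistics import mean, pstdev

# Hypothetical ship dates for recent releases.
release_dates = [date(2024, 3, 1), date(2024, 3, 8), date(2024, 3, 18), date(2024, 3, 25)]

gaps = [(b - a).days for a, b in zip(release_dates, release_dates[1:])]
print(f"Average gap: {mean(gaps):.1f} days")        # how often you ship
print(f"Gap variability: {pstdev(gaps):.1f} days")  # how consistent the cadence is
```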
Release step timing
How long does each part of your release process take?
To help you zoom in on and improve your team’s release process, Runway captures the time spent at each step of the way so you can pinpoint slowdowns and ship more efficiently.
Checklist and approvals completion times
How long are sign-offs and action items taking?
Runway can also help you track performance around the unique parts of your team’s process. For checklist items and approvals, Runway surfaces how long they take to get actioned by your team.