The cost of fixing a bug post-release is at least two to three times higher than fixing it in the testing phase.
Regression testing — which involves making sure that new features added to your application don’t break existing functionality — is a key strategy employed by high-performing mobile teams to help mitigate the risk of post-release bugs by catching issues earlier in the development cycle. However, as the scope and functionality of an app grows, regression testing can start to demand significant resources — especially if testing is performed manually — which can lead to bottlenecks in the development and release pipeline.
For this reason, automated regression testing has become an attractive option among mobile teams tasked with pushing out regular releases with minimal regressions. When implemented properly, it offers a middle ground that enables teams to run through large test suites in less time and with fewer human resources.
But, there’s a case to be made against over-reliance on automation, especially in high-stakes release pipelines. Building an automated testing stack comes with significant overhead, and can be costly to properly maintain. Automated regression tests are also a lot less flexible when it comes to complex user flows and tend to be difficult to implement reliably in certain situations.
Instead of simply using automation as a one-size-fits-all solution for regression testing, your team needs to understand what should be taken into account when considering investing in automated testing, and prepare a holistic strategy that balances the use of automated and manual testing approaches as needed.
Today, we’re going to compare the pros and cons of automated and manual regression testing to help you prepare a strategy that strikes the right balance between the two, so your team can benefit from the best of both worlds.
Why regression testing matters more in mobile development
Regression testing is a form of software testing that ensures that changes made to an application do not have unintended consequences on the existing functionality of the system. It can happen at any point in the software development lifecycle, ideally before finalizing your release candidate and pushing it out for release.
Regression testing is most commonly part of the final stage of quality assurance, right before the release candidate is submitted for review and pushed out to app stores. This ensures that the final build being released to users has been fully tested and is free from defects. While this approach has its benefits, it can also result in a more time-consuming overall testing process if many fixes are needed, leading to multiple iterations of regression testing for any given release.
With the rise of the shift-left model, mobile teams are opting to conduct more of the testing process during the earlier (or “left side”) phases of the development lifecycle. In practice, that can mean running regression testing suites every time a new feature is merged into the main working branch, rather than waiting to perform regression testing as part of the release process.
Regardless of which approach you plan to take, regression testing is a particularly vital step for mobile teams because of the peculiar constraints of third-party app releases. Here are a few reasons why:
Gated releases
Mobile updates aren’t real-time. Worse still, app stores don’t allow users to downgrade to a previous version of your app if something goes wrong with an update. This makes every new update for iOS or Android especially high-stakes.
Fragmented environments
When it comes to deploying mobile apps, there’s a host of different hardware configurations and operating systems to account for in users’ devices. This inherent fragmentation makes regression testing — especially the manual kind — even more important, because of the variability involved in the real world usage of a mobile app.
Fickle users
According to a survey by Localytics, 21% of users abandon a newly downloaded app after a single use. That means a single botched release can lead to a marked increase in abandonment rates, whereas consistent release quality can greatly boost long-term user retention.
Manual vs automated regression testing: what’s the difference?
Manual. Automated. Hybrid. Which regression testing techniques should you adopt for your mobile org?
When discussing regression testing, it’s important to understand that the choice between manual and automated isn’t a binary one. In fact, teams often tackle the process using a healthy combination of both automated testing scripts and manual testing, in an effort to cover as many test cases as efficiently as possible.
Let’s take a look at the advantages and disadvantages of both manual and automated regression testing to get a better understanding of how to combine them to develop a larger regression testing strategy.
Manual regression testing
Manual regression testing typically involves QA team members executing a set of test cases to ensure that code changes, feature additions, and bug fixes don’t break existing features of an application. It’s done by identifying relevant test cases that cover key parts of the application’s functionality, executing these test cases manually, and reporting any defects found to the relevant teams.
In a typical workflow, QA testers are given a spreadsheet with a list of all test cases that need to be run. Each test case comes with an accompanying set of instructions (sometimes referred to as a “script”) that details the steps to be taken to run the test and the expected outcome of those steps. QA teams manually go through each test case and note the results to see if they match the desired outcome. If not, they’ll create a ticket to raise the issue with the right team.
Pros:
- More reliability and flexibility: When there’s a human brain piloting the regression tests, there’s a lot more flexibility in terms of how the test script can be executed. Moreover, human teams can execute tests with complex flows and branching pathways without the fear of flakiness that plagues automation platforms.
- No reliance on third-party platforms: With manual regression testing, your team can reduce reliance on third-party testing platforms and eliminate a lot of the overhead associated with managing and maintaining those platforms.
- Quicker turnaround for new tests: New test cases are much easier to spin up with manual testing compared to dealing with the overhead that comes with implementing an automated script. That’s because manual testing doesn’t require a huge amount of upfront resources to set up initially.
Cons:
- Human error reduces accuracy: When you’re dealing with a large regression test suite, a manual approach can become a time-suck that quickly leads to tester fatigue, increasing the likelihood of human error and reducing accuracy in the process.
- Leads to slower release cycles: Manual testing is more time-consuming than the automated approach, which can lead to bottlenecks in release pipelines and a higher time-to-market for new app updates.
- Can be more expensive to scale: Because manual testing involves dedicating human resources to an ongoing task, it can be expensive to adopt and to scale. As your test suite grows, more resources need to be brought in to execute additional test cases without compromising on the efficiency of your overall process — this can further increase costs.
Automated regression testing
Automated regression testing often becomes necessary for teams with larger test suites where manual testing may be time-consuming, error-prone, and costly. It involves the creation of automated test scripts that can be run repeatedly in a machine environment to verify that the software functions as expected. These tests are often run on a nightly basis or triggered on demand when a new version is nearing release. Tools to help with building out and maintaining automated testing scripts are widely available with a variety of features to cater to the testing needs and priorities of different kinds of teams.
Pros:
- Much faster than manual testing: Machine environments are capable of executing test cases much more quickly than humans can. Additionally, automated test suites can be parallelized across device farms or multiple virtual devices, further increasing the overall speed of execution.
- Scales easily with complex applications: Automated regression testing is more scalable than the manual approach because it can allow teams to expand test coverage without significantly impacting execution time and expending more human effort in the process.
- Reduced ongoing cost of execution: Whereas manual testing requires constant resources so that tests can be executed repeatedly, automated tests require limited maintenance once you deal with the initial overhead of writing the test scripts.
Cons:
- Large initial investment and overhead: While it leads to less time and resources spent testing the application in the long run, automated tests require mobile teams to devote significant effort towards the initial implementation and setup of the automation stack and test scripts.
- Slower to add new test cases: Implementing new test cases (or updating existing ones) comes with significant overhead that makes it more difficult to turn around changes quickly. This makes automated tests less flexible than manual test scripts.
- Increased build times in CI/CD: Automated regression tests are usually carried out as part of the CI/CD build pipeline, leading to higher build times and increased costs, even though the tests themselves are actually running faster than in a manual testing environment.
Creating a balanced regression testing strategy for mobile teams
There are pros and cons to both automated and manual testing approaches, which is why teams should opt to have a hybrid strategy that balances automated test execution with human testers to capture the best of both worlds. By evaluating your testing approach on a case-by-case basis — looking at each test case and deciding what works best in the given scenario — you can build a regression testing strategy that’s optimized for the features being tested.
When building out a regression testing suite, it can be a challenge to make tests reliable, accurate, and fast. And, if your team needs to run a large volume of test cases, the natural inclination might be to try to automate as much as possible. However, there are still cases where manual regression testing might actually be a more pragmatic approach. Let’s talk about some of the key considerations for deciding whether a test case should be automated or not.
How critical is the feature being tested?
If a given feature is critical to your mobile app’s core functionality, it might make sense to undertake a thorough manual testing process as an added layer of assurance, rather than relying entirely on automated platforms to test that feature.
This is because while automation usually helps improve accuracy when executing large amounts of test cases, it can have the opposite effect when targeting a single test case of considerable complexity — and any false negatives (where the automated regression test is marked as a “pass” when it in fact should have failed) could have a devastating impact on your users and core business.
To increase confidence in the result of automated test cases for high-impact flows, consider adding an extra layer of manual testing (sometimes referred to as “sanity checks” or “smoke tests”) to decrease the likelihood of testing errors in the most important parts of your product.
How easy is the test case to define?
Generally speaking, it’s much harder to write automated tests than to run a test manually. Each test case needs to be broken down into a detailed list of individual commands that an automation framework can easily execute. Tests need to take into account preconditions for getting the application to the initial testing state, and consider how the transitions from state to state might impact the outcome of the test. And, if your app performs transactions with external APIs, it can be difficult to set up a testing or sandbox environment against which to execute automated tests in a repeatable way.
All of that makes automated tests harder to define, which is why it’s important to consider the complexity of any regression test cases you pick for automation — the more complex the test case, the more difficult it will be to write a reliable automated test. For this reason, a good rule of thumb is that if the test can lead to different situations that require human judgment to navigate, it’s better suited for manual testing.
Example 1: Simple, better suited for automated testing.
Test Goal: Verify the checkout process.
Description: Verify the checkout process of a mobile ecommerce app after a recent update to the application code.
Pre-conditions: User should be logged in, have items in the cart and proceed to checkout.
Steps:
- User adds item(s) to the cart.
- User proceeds to checkout.
- User enters shipping details.
- User enters payment details.
- User verifies the order summary and confirms the order.
- User verifies the order confirmation page.
Expected Result: User should be able to complete the checkout process successfully without any errors.
Test Environment: Android and iOS devices
Test Data: Valid user credentials, valid shipping and payment details.
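A test case like Example 1 maps cleanly onto an automation script, because each step is a deterministic command with a verifiable outcome. The sketch below is illustrative only: `CheckoutApp` is a hypothetical stand-in for a page-object wrapper around a real driver session (for example, an Appium client), stubbed in memory here so the shape of the script is clear.

```python
# Hypothetical sketch of Example 1 as an automated regression test.
# CheckoutApp stands in for a page-object wrapper around a real driver;
# it is stubbed in memory so the structure of the script is visible.

class CheckoutApp:
    def __init__(self):
        self.cart = []
        self.shipping = None
        self.payment = None
        self.order_confirmed = False

    def add_to_cart(self, item):
        self.cart.append(item)

    def proceed_to_checkout(self):
        assert self.cart, "cannot check out with an empty cart"

    def enter_shipping_details(self, details):
        self.shipping = details

    def enter_payment_details(self, details):
        self.payment = details

    def confirm_order(self):
        # Order summary must reflect the details entered earlier.
        assert self.shipping and self.payment
        self.order_confirmed = True


def test_checkout_flow():
    app = CheckoutApp()            # pre-condition: logged-in session assumed
    app.add_to_cart("sku-123")     # step 1: add item(s) to the cart
    app.proceed_to_checkout()      # step 2
    app.enter_shipping_details({"city": "Berlin"})   # step 3
    app.enter_payment_details({"card": "4242"})      # step 4
    app.confirm_order()            # steps 5-6: verify summary, confirm
    assert app.order_confirmed     # expected result: checkout completes


test_checkout_flow()
```

Because every step and assertion is unambiguous, a script like this can run unattended on every merge or nightly build without human judgment.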
Example 2: Unpredictable, better suited for manual testing.
Test Goal: Verify the app behavior when the internet connection is lost during the checkout process.
Description: Verify the behavior of the mobile ecommerce app when the internet connection is lost during the checkout process, following a recent update.
Pre-conditions: User should be logged in, have items in the cart and proceed to checkout.
Steps:
- User adds item(s) to the cart.
- User proceeds to checkout.
- User enters shipping details.
- User enters payment details.
- Internet connection is lost.
- User verifies the app behavior.
- User reconnects to the internet and reconfirms the order.
Expected Result:
- User should be alerted about the lost internet connection and prompted to reconnect.
- User should be able to reconnect and continue with the checkout process from where they left off.
- User should be able to complete the checkout process successfully without any errors after reconnecting.
Test Environment: Android and iOS devices
Test Data: Valid user credentials, valid shipping and payment details, internet connection.
How often is the test case changing?
As your mobile app continues to add new features and iterate on core functionality, it becomes necessary to go back and adapt test cases to the new behavior.
Test cases that are likely to need to be updated frequently may not be best suited for automation due to the additional overhead associated with updating the automation script each time you revisit the test case after an update. Features that are changing often benefit from the flexibility that manual testing affords, which enables teams to adapt quickly to changes in a cost-effective way.
To help identify the test cases that are frequently updated (and therefore less appropriate for automation), you can use something called test case version control to track all the times that you’ve had to make changes to an existing test script due to a new feature addition or bug fix.
Test case version control is the process of maintaining a version history of your test cases as you revisit and rewrite them throughout the development process. It involves tracking changes made to test cases over time, assigning specific version identifiers to each record every time it changes, and retaining full historical records about test activities carried out.
By looking at the version history of a regression test case, you can determine how often your test flow has changed in a given period of time. That can help you decide whether to pursue automation for that test case, or whether to wait until the feature is more mature and less likely to change.
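The decision rule described above can be expressed as a simple check over a test case's revision history. The sketch below is illustrative, not a real tool: the 90-day window and three-change threshold are arbitrary assumptions your team would tune.

```python
# Illustrative sketch: given the dates on which a test case's script was
# revised, count recent changes to flag frequently-changing cases as
# poor automation candidates. Window and threshold are assumed values.
from datetime import date, timedelta

def changes_in_window(revision_dates, window_days=90, today=None):
    """Number of revisions within the last `window_days` days."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return sum(1 for d in revision_dates if d >= cutoff)

def suitable_for_automation(revision_dates, max_changes=3, **kwargs):
    """A test case churning more than `max_changes` times in the window
    is probably better left as a manual test for now."""
    return changes_in_window(revision_dates, **kwargs) <= max_changes

history = [date(2023, 1, 5), date(2023, 2, 10),
           date(2023, 2, 28), date(2023, 3, 15)]
print(suitable_for_automation(history, today=date(2023, 4, 1)))
# False: four revisions in the last 90 days is too much churn
```

A check like this only works if revision dates are actually recorded, which is exactly what test case version control gives you.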
How reliable is the automated test going to be?
Automated tests may run faster, but they are also vulnerable to testing errors (aka “flakiness”) under certain conditions. Flakiness in test automation refers to inconsistency in test results, where the same test may pass or fail without any changes to the code or the test itself.
While there are multiple factors that may contribute to flakiness in automated regression tests, it’s important to pay special attention to tests involving things like in-app transitions and hardware virtualization, both of which tend to make writing reliable automated tests particularly difficult. It’s important to take them both into consideration when deciding whether to perform regression testing using an automated or manual approach.
- Timed transitions: Automated regression tests often rely on the ability to interact with the app's UI elements in a consistent and predictable manner. Transitions and complex animations can introduce variability in the timing of these interactions, making it difficult for automation frameworks to accurately interact with the elements in the app interface.
For example, if an element is only visible for a short period of time during an animation, automation frameworks may not be able to interact with it before it disappears. This can result in false positive or false negative test results, leading to unreliable test runs.
- Hardware virtualization: While most automation frameworks are designed to work with multiple mobile devices and hardware configurations, there are cases when automated regression testing tools may be insufficient at virtualizing the necessary hardware for testing purposes.
For example, if a test case relies heavily on features like Bluetooth, the camera, or a gyroscope, it can be difficult or impossible to replicate those hardware configurations in a test environment. That can lead to flaky results, making the test case a much better candidate for manual testing using dedicated devices in the hands of engineering or QA.
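One common mitigation for timing-related flakiness is to replace fixed delays with an explicit polling wait that retries until the target element is ready or a timeout expires (most automation frameworks offer a built-in equivalent, such as Selenium's explicit waits). A framework-agnostic sketch, where the `condition` callable stands in for a real element lookup:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse. Explicit waits like this are more robust than fixed
    sleeps when animations make element timing unpredictable."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll_interval)

# Usage sketch: in practice `find_confirm_button` would be a real
# element lookup; here a counter simulates an element that appears
# only after a transition finishes.
state = {"calls": 0}
def find_confirm_button():
    state["calls"] += 1
    return "button" if state["calls"] >= 3 else None

print(wait_until(find_confirm_button, timeout=2.0, poll_interval=0.01))
# button
```

Polling cannot fix every timing problem (an element that genuinely disappears mid-animation will still be missed), which is why flows with heavy transitions may still belong in the manual column.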
How much can you save by automating the test?
Finally, mobile teams need to consider the business side of things to determine whether a test case should be automated or not. Without calculating the ROI of test automation, you’re flying blind as to whether automation is a financially sound investment for your team.
While it’s true that test automation generally helps reduce resource usage and increase testing efficiency for your team, automating the wrong regression tests can easily lead to a resource drain since establishing and maintaining both automated test suites and their underlying infrastructure can be a massive investment in itself.
To calculate the ROI of test automation, you need to estimate the cost savings from automation and compare it to the total cost of maintaining your automation framework. Here’s a simple formula that you can use to calculate automation ROI:
ROI = Total Savings / Cost of Automation
Here, Total Savings refers to the amount saved by running the test case in an automated way instead of running it manually for a given period of time. Meanwhile, Cost of Automation refers to the cost associated with establishing and managing the automation stack and the necessary scripts.
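As a worked example of the formula, suppose a test costs $50 per manual run and runs 100 times a year, against a one-time $2,000 setup cost plus $500 per year of maintenance (all figures are illustrative, not benchmarks):

```python
# Worked example of the ROI formula above. All figures are illustrative.
def automation_roi(manual_cost_per_run, runs_per_period,
                   setup_cost, maintenance_cost_per_period):
    total_savings = manual_cost_per_run * runs_per_period
    cost_of_automation = setup_cost + maintenance_cost_per_period
    return total_savings / cost_of_automation

# $50/run x 100 runs vs. $2,000 setup + $500/period maintenance:
roi = automation_roi(50, 100, 2000, 500)
print(roi)  # 2.0
```

An ROI above 1 means automation saves more than it costs over the period; below 1, the test is likely better left manual, at least until run frequency grows.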
Visibility is the key to successfully incorporating a regression testing strategy into the dev and release workflow
When mobile teams approach regression testing, they’re often quick to jump to automation as the most obvious approach. But it’s important to recognize situations where going the automated route can actually be counterproductive. Instead of rushing to automate your entire regression testing suite, evaluate test cases on an individual basis to determine which approach each is better suited for. Creating a balanced regression testing strategy is a key part of building a streamlined and efficient development workflow, and an important ingredient of your overall Mobile DevOps strategy, helping you optimize your process for a fluid, iterative workflow while minimizing the risk of costly post-release bugs.
However, equally important to developing a balanced regression testing strategy is successfully incorporating it into the broader development and release workflow of your mobile org; and for this, proper visibility into your regression testing strategy — and where it fits into your overall development and release process — is a must-have. That means having a clear line of sight into the scope of your regression testing suite, knowing which team members are assigned to what test cases, and tracking historical outcomes of test runs to more easily identify unreliable and inconsistent tests. Many mobile teams struggle with providing this visibility, making the regression testing process a black box that only a select few on the team feel equipped to manage.
For mobile teams shipping regular releases and running manual regression testing as part of this process, having visibility into your team’s test plan, being able to track progress on test cases, and having a historical record of the outcome of test runs across releases is critical towards making regression testing more transparent across the overall mobile dev and release lifecycle. Platforms like Runway can help surface regression testing as a key part of the development and release process: regression testing is prominently featured as an important step of the release process, with a focus on improved collaboration and visibility for the entire mobile team.
Want to learn more about how Runway can help improve your team’s release process so that you can dedicate more resources towards regression testing? Book a demo!