Software testing life cycle: testing phases every mobile app goes through
Thorough testing is the only way to ensure that apps work as intended. We cover the six phases of the software testing life cycle your app should go through, from planning to release.
Excellence, then, is not an act but a habit.
This famous quote, commonly attributed to Aristotle, points to the importance of adhering to a consistent, repeatable process to ensure the highest-quality results.
In mobile app testing, the repeatable process we rely on is called the software testing life cycle (STLC).
The STLC is a framework used to systematically and consistently test software throughout its development life cycle. Because it’s repeatable, it enables teams to detect defects faster, ensuring that apps are completed to specification.
So, how does the STLC work? Let’s take a closer look.
The software testing life cycle is divided into six phases. Each has distinct entry and exit criteria, or in other words, its own prerequisites and deliverables, respectively.
- Requirements analysis
- Test planning
- Test case design
- Test environment setup
- Test execution
- Test closure
Here’s what each of these phases consists of.
Requirements analysis
The first step in testing software is determining what you’ll test and how. This is requirements analysis in a nutshell.
In this phase, testers analyze the specifications and requirements of the software to determine the testing goals of the project.
They then brainstorm which types of functional and non-functional testing, including manual and automated approaches, can best achieve these goals.
Here is a depiction of requirements analysis:
The entry criterion for this phase is the software requirements specification (SRS) document, a high-level description of the purpose and requirements of the project.
Once the testing team has a general idea of the project, they can then begin defining the scope.
This is where testers determine what testing procedures to implement at every phase of the software development life cycle (SDLC), such as smoke testing or user acceptance testing.
Teams also need to specify the prerequisites, environments, and test cases to be used.
One of the vital exit criteria for requirements analysis is the requirement traceability matrix (RTM). This document lists all the software requirements the testing team has identified.
When the quality assurance (QA) team adds a new test case or method in later stages, it’s then mapped to one of the requirements in the matrix.
With such a document, it’s easy to see if any requirements still need to be verified with a test case. It’s also bidirectional, allowing you to trace a requirement forward to its test cases and a test case back to its requirement.
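To make this concrete, here’s a minimal sketch in Python of how an RTM-style mapping can reveal requirements that still lack coverage. The requirement and test case IDs are hypothetical placeholders, not a prescribed format:

```python
# Hypothetical requirements, as they might appear in an SRS.
requirements = {
    "REQ-001": "User can log in with valid credentials",
    "REQ-002": "Login page rejects invalid credentials",
    "REQ-003": "User can reset a forgotten password",
}

# Each test case traces back to the requirement it verifies.
rtm = {
    "TC-101": "REQ-001",
    "TC-102": "REQ-002",
}

def uncovered_requirements(requirements, rtm):
    """Return requirement IDs that no test case traces back to."""
    covered = set(rtm.values())
    return [req for req in requirements if req not in covered]

print(uncovered_requirements(requirements, rtm))  # ['REQ-003']
```

Inverting the mapping gives you the reverse trace, from a requirement to every test case that verifies it.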
An RTM looks similar to this:
If the QA team plans to use automated testing methods, they also need to conduct an automation analysis. This is where the team studies the testing requirements and processes to determine whether automation is feasible or not.
They then compile their findings into an automation feasibility report, which will be subject to management approval before automated testing can proceed.
If you’re unsure whether you need automated or manual testing methods, you can get an excellent overview of the topic here.
Test planning
In the test planning phase, the QA team gets into the nitty-gritty details of the project, listing the actual tasks and steps for each testing procedure.
The test planners will also devise the schedules, test roles, software tools, and resources needed to achieve testing goals.
Think of the test planning phase as creating the blueprint for your software’s testing regimen.
It’s a critical document because everyone in the project, from stakeholders to the developers, will refer to it to know how software testing will proceed for every part of the project.
Because of this, the test plan is often created in collaboration with members of the project outside of the QA team. The document also tends to get regular updates to ensure testing keeps up with any changes in the software requirements.
Here’s what the test planning phase looks like:
The entry criteria are, of course, the RTM and updated requirement documents from the previous phase. These give QA planners the information they need to design test methods that cover every requirement listed.
The first step of test planning is to define the objectives and deliverables for each procedure. That also includes the tasks, testing tools, scope, and testing environment needed.
Parameters such as benchmarks and testing exit criteria are also essential for establishing the ground rules of each test.
Here, it’s important to mention defect management, which defines how (screenshots, logs) and where (a tool or a specific person) to properly report bugs. If defect management is mishandled, bugs can either go unnoticed or be fixed incompletely.
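As an illustration, here’s a minimal sketch in Python of the fields a defect report might capture under such a policy. The structure and field names are hypothetical; in practice, this usually lives in a bug-tracking tool rather than in code:

```python
from dataclasses import dataclass, field

# A hypothetical defect report structure (requires Python 3.9+ for
# the built-in generic list[str] annotations).
@dataclass
class DefectReport:
    defect_id: str
    summary: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    severity: str = "medium"                              # e.g., low/medium/high/critical
    screenshots: list[str] = field(default_factory=list)  # paths to captured images
    logs: list[str] = field(default_factory=list)         # relevant log excerpts
    assigned_to: str = "triage"                           # the tool queue or person handling it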
Other vital parts of test planning include estimating the effort and resources required for testing. These two steps help you determine the hours and cost of every testing procedure.
Doing this helps keep the testing schedule and resources aligned with the rest of the project.
After test planning, the QA team should have a comprehensive test plan document. This also serves as this phase’s exit criterion.
Test case design and development
With the test plan document complete, there’s just one phase left before the actual testing begins—designing test cases. Here’s what this phase looks like:
A test case is a series of repeatable steps that test a software feature for a specific outcome with distinct inputs. This is different from a test scenario, which only defines the feature or functionality you want to test.
In other words, to test a particular scenario, you often need to run one or more test cases for it.
Here’s an example.
Suppose you want to test an app’s login page (the test scenario). In that case, the test case might be to enter an invalid username and password so you can see how your login page handles errors.
Another test case would be to determine what happens if you submit the login page with blank fields.
The goal of a test case is to check the expected output against the actual results. If there’s a mismatch, then the test case is considered unsuccessful, and the defect is reported.
For example, the expected output of entering an invalid password in the login system is that the system informs you of the error and prevents you from logging in. If the system does anything other than this, then the test case is deemed a failure.
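Here’s a minimal sketch of those two login test cases written with pytest and Selenium. The URL and element IDs are hypothetical and would come from your own app; treat this as an illustration of the expected-versus-actual check, not a definitive implementation:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    # Assumes Chrome and Selenium 4+; the login URL is a placeholder.
    drv = webdriver.Chrome()
    drv.get("https://example.com/login")
    yield drv
    drv.quit()

def submit_login(driver, username, password):
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def test_invalid_credentials_show_error(driver):
    # Expected result: an error message appears and login is blocked.
    submit_login(driver, "no_such_user", "wrong_password")
    assert driver.find_element(By.ID, "error-message").is_displayed()

def test_blank_fields_are_rejected(driver):
    # Expected result: submitting empty fields does not log the user in.
    submit_login(driver, "", "")
    assert "dashboard" not in driver.current_url
```

A precondition like “the user already has a valid account” (mentioned below) would typically be handled in a fixture as well.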
One major challenge of test case design is that you need to consider every possible permutation to achieve 100% coverage. But at the same time, each case must also be unique so you don’t waste time testing more than you should.
So, here’s an example of a test case. You can download a template here.
Source: Smartsheet.com
It’s essential to be clear and transparent when devising test cases. Don’t be afraid to simplify the steps as much as you can. Doing this ensures that testers perform every action as intended with zero room for interpretation or ambiguity.
Aside from detailed procedures, test cases should list the specific conditions under which the test is conducted. This gives your testing a reasonable degree of control and repeatability.
For example, a login page test case requirement would be that the user already has a valid account.
The exit criteria for this phase are the actual test cases, which you’ll then deploy to testers for execution.
Test environment setup
With the first three planning stages of the STLC complete, we now move on to the actual testing. Kicking it off is the test environment setup phase.
Just as developers need equipment and tools to code the software, testers also require a proper setup to test it. To run tests efficiently, they must have the right servers, devices, testing tools and frameworks, and the proper network configuration.
For example, to properly test a mobile app, testers must use an Android or iOS device that meets the app’s minimum memory and processor specifications. Likewise, if evaluating a website, the QA team might require a specific browser.
In many ways, setting up the right environment is the most crucial part of testing. It provides the controlled conditions needed to keep your tests consistent, predictable, and repeatable.
Without it, teams can’t carry out test procedures, or they run the risk of producing inaccurate results.
Here’s a snapshot of this phase:
The bare minimum for any test environment is to have the underlying hardware in place. This includes individual computers running an operating system compatible with the target software.
Ideally, testing should also have a dedicated server and database to collect results.
Next, determine any software tools needed for testing. If you plan to incorporate automated testing platforms like Selenium and Appium, ensure they’re adequately configured in every testing device.
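For instance, here’s a minimal sketch of configuring an Appium driver for an Android test device, assuming the Appium Python client v2+. The device name, app path, and server URL are hypothetical:

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

# Hypothetical capabilities; in practice these come from the test plan
# and the device matrix defined for the project.
options = UiAutomator2Options()
options.platform_name = "Android"
options.device_name = "Pixel_7_API_34"        # placeholder device
options.app = "/path/to/app-under-test.apk"   # placeholder build artifact

# Assumes an Appium 2.x server running locally on the default port.
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
```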
Also, don’t forget to prepare relevant documentation and checklists. Examples include printed test cases, manuals, and requirement documents.
Lastly, your testing environment must pass a smoke test. This type of “gatekeeper” test quickly checks whether something is ready for further testing.
In the case of your test environment, a smoke test determines if it’s stable enough to move on to further testing stages.
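A smoke test can be as simple as a few automated checks. Here’s a minimal sketch using pytest, where the tool name and server URL are hypothetical placeholders:

```python
import shutil
import requests  # third-party: pip install requests

def test_required_tools_are_installed():
    # adb is shown as an example dependency for Android testing.
    assert shutil.which("adb") is not None, "adb not found on PATH"

def test_test_server_is_reachable():
    # Placeholder health endpoint on the dedicated test server.
    response = requests.get("http://test-server.local/health", timeout=5)
    assert response.status_code == 200
```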
Before we end this section, note that there are different kinds of testing environments:
Types of test environments
Source: Test Environment Management
Each of these testing environment types has drastically different requirements.
For instance, chaos testing involves testing the resilience of your software, so the environment needs to be scalable and available 24/7. Security testing, on the other hand, often requires complete isolation.
Test execution
We’ve now come to the main event—the execution phase. Here is where testers perform test cases, validate the results, and report any bugs.
Here’s what the test execution phase entails:
We’ve already defined the test plan and test cases. The next entry criterion, the build, refers to the software that the QA team will test. Developers often package it into an installable form for deployment into the testing environment.
Before the main testing itself, QA teams often subject the software build to preliminary tests to ensure it’s stable enough for more extensive procedures. If it fails, then it’s rejected by the QA team.
This step ensures that testers don’t waste time evaluating something with too many significant bugs in it.
The two preliminary tests used most frequently are smoke and sanity testing. If the build passes these, QA testers take over, and proper testing officially begins.
Testing often occurs in two to three cycles, each executing all of the test cases involved. The early cycles are designed to look for critical bugs or blocking issues. The latter are especially problematic since they prevent you from completing your test cases.
After executing each test case, the tester records the actual outcome and compares it to the expected result.
These findings are compiled into a test case execution report and defect summary report, complete with a description of the encountered defect, screenshots, and system logs, as specified in the test case document.
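At its core, this step is a comparison of expected and actual outcomes. Here’s a minimal sketch of how results might be tallied into a summary; the test case IDs and outcomes are hypothetical:

```python
# Hypothetical execution results for two test cases.
executed = [
    {"id": "TC-101", "expected": "login succeeds", "actual": "login succeeds"},
    {"id": "TC-102", "expected": "error shown", "actual": "app crashes"},
]

passed = [tc for tc in executed if tc["actual"] == tc["expected"]]
defects = [tc for tc in executed if tc["actual"] != tc["expected"]]

print(f"{len(passed)}/{len(executed)} test cases passed")
for tc in defects:
    print(f"DEFECT {tc['id']}: expected '{tc['expected']}', got '{tc['actual']}'")
```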
The test execution phase continues until all test cases have passed and every defect has been closed or resolved.
At this point, the testing is more or less complete, but it’s still a good practice to do a final regression test. This step checks whether bug fixes during testing might have introduced a new defect or unwanted changes in functionality.
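If you use a runner like pytest, one lightweight way to support this is to tag regression checks with a marker so the whole set can be rerun after each fix. The marker name here is arbitrary:

```python
import pytest

# Register "regression" under [pytest] markers in pytest.ini to avoid
# warnings; then `pytest -m regression` reruns only these checks.
@pytest.mark.regression
def test_login_error_message_still_displays():
    # Re-checks behavior that a recent bug fix could have affected.
    ...
```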
Test closure
Test closure is the final step that formally ends the STLC. The team completes all testing, archives all test documents, and hands over the results to maintenance.
Here’s what the test closure process looks like:
Test closure begins when all test cases are completed and results are submitted to the QA lead or test manager. Every bug detected during testing should also be resolved at this point.
The critical task for this phase is to ensure that all deliverables (test results, RTM, scripts, test plan) are complete and finalized. These will then be sent to key stakeholders, notably the maintenance team.
A critical part of test closure is the post-mortem analysis of the completed test procedure. Teams discuss things that went wrong to avoid similar mistakes in the future and establish best practices that they can repeat.
That’s why it’s also vital to archive all test documents and reports. By doing this, procedures can be repeated in the future, even by an entirely different testing team.
Testing is the key to fantastic apps
As you can see, testing includes more than just looking for bugs. It’s a rather involved process that’s an entire discipline in itself.
But with today’s users demanding the safest and highest-performing products, testing is the key to creating profitable, groundbreaking mobile apps.
At DECODE, we take our testing philosophy seriously. That’s why we’ve developed a process that ensures every project we deliver adheres to an exceptionally high standard.
Interested in working with us? Contact us today, and let’s bring your next app idea to life.