
When it comes to test reporting, colorful charts and graphs often take center stage. They’re visually appealing, and seeing “80% success” can give you a boost. But there’s a danger in relying too heavily on these metrics.
Pass percentages and pie charts can paint an overly simplistic picture of your test results, without actually showing you what’s working and where the critical issues with your product lie.
That’s why transparency in test reporting is so important. Instead of focusing on abstract figures, communicate the nitty-gritty details – that means knowing what’s been covered, what hasn’t, what’s working and what isn’t.
What’s the problem with metrics?
What’s our problem with metrics? Well, there’s nothing wrong with metrics per se. It’s useful to know what percentage of your tests have passed, and how many have failed – but those numbers alone don’t give you the true picture of where your product is at.
Pretty charts and graphs shouldn’t stop you from finding out what you really need to know: what’s passing and what’s failing. Visual elements are useful for seeing at a glance where the problems lie, but they need to be backed up by enough detail that you can clearly see what works and what doesn’t.
There’s also another danger: that metrics don’t communicate testing gaps.
Charts and graphs only show what has been tested, not what hasn’t – which could leave stakeholders unaware of potential blind spots. A report might show that every test that was carried out passed – while missing the crucial detail that an entire feature set went untested due to time constraints.
Metrics have their place. But if they’re presented without any context, or without enough context, they can be misleading. Instead of relying on charts and graphs that barely scratch the surface, it’s important to dig deeper and provide actionable test reports that tell you what you really need to know – while still being easy to understand, even for non-technical audiences.
The problem with test case counts
Many test reports include test case counts – that is, the total number of individual test cases that have been written or carried out during the testing process. But this number doesn’t actually tell you anything important about your product.
You might have a test case count of 100. But what if the testers have focused their efforts on low-priority areas of your product – error messages, say – while leaving high-risk, business-critical features like checkout functionality untouched?
A large number of test cases doesn’t guarantee that your product has been comprehensively tested. If there are multiple test cases for minor edge-case scenarios, but none for critical workflows, significant problems could go unnoticed – and you’d never be able to tell that from a simple test case count.
The problem with pass percentages
Pass percentages are misleading in the same way. Let’s say you run a software testing report which shows that 95% of your tests have passed. That sounds like a great success – but it doesn’t reveal what passed and what didn’t.
What if the 5% of tests that failed include critical functionality? Without context, these numbers can be misleading, making you think that a product is ready for release when it’s nowhere near that stage.
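To make that concrete, here’s a minimal, purely illustrative sketch – every test name and result below is invented for the example – showing how a healthy-looking pass percentage can sit on top of a business-critical failure:

```python
# Hypothetical test run: 95 low-risk checks pass, 5 critical checks fail.
# Names and numbers are invented purely to illustrate the point.
results = {f"low-risk check {i}": "pass" for i in range(1, 96)}
results.update({
    "checkout - card payment": "fail",
    "checkout - order confirmation": "fail",
    "checkout - apply discount code": "fail",
    "login - password reset": "fail",
    "login - two-factor prompt": "fail",
})

passed = sum(1 for outcome in results.values() if outcome == "pass")
print(f"Pass rate: {passed / len(results):.0%}")  # "Pass rate: 95%" - looks great

# ...but the headline number says nothing about *what* failed.
for name, outcome in results.items():
    if outcome == "fail":
        print("FAILED:", name)
```

The summary figure is technically accurate, yet the list of failures is where the real story is – which is exactly the detail a bare percentage hides.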
An overall test completion percentage (comparing the number of tests that have been run with those not yet run) can be useful for tracking progress. But pass/fail percentages need context to be worth including in your test reporting.
Why showing actual test coverage matters more than numbers
Test coverage is often reduced to metrics like the percentage of test cases executed, or the test case count. The number of tests carried out tells you something – but it’s not what really matters.
Understanding test coverage goes beyond surface-level statistics – it should offer meaningful insights about the scope and depth of testing, including:
- Feature coverage: Which features and functionalities have been tested – and how thoroughly?
- Risk coverage: Have the areas with the highest business impact been thoroughly tested?
- User workflow coverage: How well have critical user journeys been tested?
- Test types: What kinds of tests have been carried out (for example, unit testing, system testing or functional testing)?
- Untested areas: Are there features or functionalities that haven’t been tested? And why haven’t they been covered?
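One way to picture this – a rough sketch rather than a prescription, with feature names, risk levels and fields all invented for illustration – is a coverage summary that records scope, depth and deliberate omissions instead of a single number:

```python
# An invented sketch of a coverage summary that captures scope and depth
# rather than a single percentage. Fields and values are illustrative only.
coverage_summary = {
    "checkout": {
        "risk": "high",
        "test_types": ["functional", "system"],
        "user_journeys": ["guest checkout", "saved-card checkout"],
        "depth": "thorough",
        "notes": "all payment paths exercised",
    },
    "error messages": {
        "risk": "low",
        "test_types": ["functional"],
        "user_journeys": [],
        "depth": "light",
        "notes": "spot checks only",
    },
    "reporting exports": {
        "risk": "medium",
        "test_types": [],
        "user_journeys": [],
        "depth": "untested",
        "notes": "not covered this cycle due to time constraints",
    },
}

# Surfacing untested, higher-risk areas is exactly what a bare count can't do.
for area, info in coverage_summary.items():
    if info["depth"] == "untested":
        print(f"UNTESTED: {area} (risk: {info['risk']}) - {info['notes']}")
```

However you choose to record it, the point is the same: the report should make gaps and depth visible, not just totals.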
So far we’ve focused on how pie charts, graphs and metrics don’t always give you the full picture of what’s not working. But it’s just as important to understand what has been tested and what is working. If a test report simply highlights bugs and issues, it can leave stakeholders uncertain about whether the rest of the product works well – or whether it simply hasn’t been tested.
That’s why it’s important to clearly show what’s broken as well as what’s been proven to work well, to make sure that everyone has the full picture and can be confident in the testing that’s been carried out.
How exploratory testing complicates metrics further
Exploratory testing is a different type of testing from traditional scripted test cases. While scripted tests are carefully planned, written and carried out, exploratory testing is an unscripted approach in which tests are designed and carried out as testers go along.
They learn about the software, design tests and execute them without first having a plan. It can encourage creative and critical thinking, and can save time as no test cases need to be written in advance. It’s also a human-centric approach as testers explore the product in the same way that end users might do. Because the testing process evolves dynamically, exploratory testing can often uncover unexpected issues that scripted tests might miss.
However, it does make test reporting using metrics more complicated. How do you assign a number to the insights gained during an exploratory session?
That’s why, instead of trying to force exploratory testing into a traditional metrics framework, it’s better to focus on outcomes and actionable insights. That might mean creating reports that focus on outcomes rather than trying to quantify the results – which could, for example, look like a list of risks or gaps that have been discovered, categorised by the potential impact or risk level. Or you might create high-level summaries of each of the areas tested and their overall health, helping to identify areas that need further testing.
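As a rough illustration – the findings, areas and impact levels below are all invented – an outcome-focused summary of an exploratory session might simply group what was discovered by potential impact, rather than trying to turn the session into a pass percentage:

```python
# An invented, outcome-focused summary of an exploratory session.
# There's no pass percentage here - just findings grouped by potential impact.
session_findings = [
    {"area": "checkout", "finding": "basket total not recalculated after removing an item", "impact": "high"},
    {"area": "search", "finding": "special characters return a blank results page", "impact": "medium"},
    {"area": "settings", "finding": "tooltip text truncated on narrow screens", "impact": "low"},
]

for level in ("high", "medium", "low"):
    print(f"\n{level.upper()} impact:")
    for item in session_findings:
        if item["impact"] == level:
            print(f"  [{item['area']}] {item['finding']}")
```

A summary like this gives stakeholders something they can act on – which risk to tackle first, and which areas deserve another session – without forcing exploratory work into a metric it doesn’t fit.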
How we address these things in Testpad
So, what makes an effective test report? Well, rather than using lots of graphs, charts and statistics, a simpler approach is to show the full test plan, with clear visual indicators of what’s been tested, what hasn’t, what’s working, and what’s not.
In Testpad, our reports present a grid of checks and crosses against test prompts. This means users can quickly scan to find out what’s working (in green), while it’s also clear what’s not working (in red).
The report also includes all the comments and issue numbers that have been captured during testing. This gives readers the essential context behind the tests, as well as evidence for any problems found – meaning that all the details needed to understand and act on test results are in one place. There’s no need to cross-reference separate bug tracking systems, or chase testers for an explanation of what’s been done – it’s all right there for everyone to see.
Plus, the hierarchical structure of test prompts within Testpad makes it obvious how thoroughly each area has been tested, so you can clearly see whether there are any features that need further attention.
Make testing simpler – lose the graphs and charts
Graphs, charts and statistics might seem like the easiest way to present your test results – but don’t be fooled. All too often, they simply take your attention away from the details that matter the most.
Truly effective test reporting isn’t about colorful visuals or statistics that look good on paper. We’ll let you into a little secret: it’s actually about clarity and actionable insights.
Luckily, it’s easy to make your test reports clearer and easier to understand, for everyone. Tools like Testpad show how reporting test results can be made straightforward, making sure that everyone (whether they’re technical or not) can really understand what’s going right, what’s not quite working, and what needs further investigation and testing.