Why exploratory testing should be part of your QA strategy

Exploratory testing… the best kind of (manual) testing

Exploratory testing is essential for uncovering hidden issues and improving quality in real-world usage. Adopting a pragmatic approach enhances efficiency and repeatability without relying on automation.

By Stef

May 16, 2024

Exploratory Testing is often hailed as one of the most effective testing methodologies, offering unique benefits that structured approaches struggle to match. However, so many teams give it a miss, instead preferring the (supposed) speed and repeatability of automation and the audit trail of fully-scripted test case management.

Which is a shame, as the value of Exploratory Testing is undeniable. It fosters a deep understanding of the application under test, encourages creative problem-solving, and adapts to the evolving context of the project.

A big part of the problem is not knowing how to implement Exploratory Testing in real-world projects; projects with managers who want answers to very unreasonable questions like “how much have you tested so far?”, or the outrageous “how much longer do you need?”. While there are some formalizations of Exploratory Testing, in our experience they’re not widely used. So we present instead a simplified approach: keeping the benefits of human-driven testing but with some pragmatism thrown in to make it predictable, trackable and re-usable next time.

What is exploratory testing?

Before we get into how to better approach exploratory testing, let’s first look at what it is.

The International Software Testing Qualifications Board (ISTQB) defines exploratory testing as:

“An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item, and the results of previous tests.”

Er…ok. Let’s try Wikipedia, which goes for the rather more concise:

“Simultaneous learning, test design, and test execution”

Better, but still not great. What exploratory testing all boils down to, really, is:

“Making it up as you go along - with your brain engaged.”

This allows testers to choose what they do next based on what’s just happened, to react to what they observe, and to try test ideas inspired by their interaction with the product.

Exploratory testing is making it up as you go along, with your brain engaged.

To control or not to control?

To understand where exploratory testing fits in, it helps to consider the degree of control your testers have over their actions during testing.

Zero control is testing that is ad-hoc, purely freestyle, basically just using the software and trying to break it. On the other hand, full control is when every action and every expected outcome is spelled out in exacting detail.

Exploratory testing definitely includes zero control, purely freestyle testing. Exploratory testing definitely does not include fully scripted, fully controlled testing.

So, how much in between these extremes counts as exploratory testing?

Well, it doesn’t really matter!

You can rightly argue that all testing is exploratory because it’s impossible to control every micro-action of a tester, removing all agency and independent thought. But that’s getting away from the point. If you’re leaving your testers free, let’s say somewhat free, then it’s exploratory.

If you’re controlling your testers to the point where anyone could do it, it’s not exploratory. In fact, at that point, you absolutely should be automating it (assuming it’s cost-effective to implement).

The advantages of exploratory testing (a.k.a. not controlling your testers)

It’s easiest to see the value of exploratory testing when you contrast it with fully scripted testing at the other end of the spectrum.

Fully scripted testing, whether carried out by humans or automated, can only find problems on the path the tester is taken down, the problems the test authors anticipated. Your known unknowns, if you like.

But if you want confidence you’ve found all the problems there are to find, you need to predict a lot of problems, which equates to a massive collection of test instructions. It’s like trying to color in a drawing with a thin pen. You need a lot of lines scratching back and forth before it begins to look at all filled in.

If you think this sounds sub-optimal, you’d be right.

What you need is an approach that makes it more likely you’ll find problems off the beaten track, more likely to uncover your unknown unknowns. And that’s exactly where exploratory testing comes in.

Exploratory testing is more likely to find your unknown unknowns.

Exploratory-style testing achieves this precisely because it doesn’t try to define every step of the way. Instead, it leaves your testers free to use their intuition. To try out your product in the way real users might in the real world. To explore patterns of use and user behaviors the developers never even thought of. Even if they were embarrassingly obvious in hindsight. Doh!

In other words, exploratory testing is like doing that color fill with a marker pen.

Without getting too quantitative, consider the relative likelihood of different types of problems. Exploratory testing allows (good) testers to focus their effort on the more probable new bugs rather than the less probable recurrence of old ones. That’s not to say testing shouldn’t include checking the whole system, but testers will rightly intuit that the new stuff is going to have the new problems, and will invest more time and be more inventive in their tests of those features.

There’s also a human win… because your testers have a degree of agency and self-direction (along with active brains making active choices) they will inevitably be more engaged with the whole process. And who doesn’t want a more engaged team?

Side note: We should perhaps allow that there are specific situations where fully-scripted testing is in fact useful or even required. Regression testing would be a good example – building a growing collection of very specific (preferably automated) tests to ensure you don't break the same thing twice. Or in more regulated environments, controlling and recording exactly what was tested can be a matter of standards conformance – even if this means less effective testing in terms of maximizing how you learn about the state (readiness) of your product.

The disadvantages of exploratory testing

Exploratory testing is valuable and it’s simple to do. So why isn’t everyone doing it? Basically, it’s too simple.

Exploratory testing is often performed with no structure at all, the zero-control purely freestyle approach discussed above. Just pick it up and get testing. And yes, this approach is better than no testing! It will certainly find a lot of bugs. But when your manager walks over, it’s hard to answer questions like “Are you done yet?”

This obviously doesn’t fly for real-world teams with little room or patience for unplanned activities that can’t be tracked or reported on.

Of course, managing exploratory testing isn’t a new problem, and formalizations exist to address the challenge of controlling the uncontrolled.

Cue SBTM…

The most well-known is probably Session-Based Test Management (SBTM). SBTM applies some control to what would otherwise be a freeform process, breaking testing up into time-limited sessions with guides (charters) to define scope. Testers journal the testing activity, taking notes as it proceeds.

If we extend the flippant definition of exploratory testing from above, it becomes:

“For each of the following topics, spend X minutes making it up as you go along - with your brain engaged… and taking notes.”
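To make that concrete, an SBTM session boils down to a small record: a charter, a time box, and a running journal. A minimal sketch in Python — the field names and the example charter are our own illustration, not part of any SBTM standard:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed SBTM session (field names are illustrative)."""
    charter: str    # high-level scope guiding the exploration
    minutes: int    # the session's time box
    notes: list = field(default_factory=list)  # journal of what happened

# A hypothetical session: explore one area, jotting observations as you go.
session = Session(charter="Explore the import wizard with odd CSV files",
                  minutes=60)
session.notes.append("10:05 header row with trailing commas accepted silently")
session.notes.append("10:20 importing a 0-byte file hangs the spinner")
```

The notes are free text by design — that freedom is the point of the approach, and also, as we’ll see, its reporting weakness.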

Thinking back to the degree of control idea, SBTM comes in fairly low. There’s a charter with high-level items in it to guide the process, but within those items and within the time limits, testers can be very exploratory and are free to put their efforts where they see fit.

Thus, SBTM begins to help with at least the planning and tracking part. The charter (and time bounds) is a reasonable plan. And the notes taken against that plan certainly count as tracking.

However, this is often not enough from a project management point of view.

The notes that count as tracking also form the basis of reports on how testing went. The quality of such reports therefore depends heavily on the tester’s note-taking skills. It’s quite tricky to be both complete and keep a lid on verbosity.

Testers tend to either go all-in and make boring, long-winded records of every test idea performed, or they miss important information in an effort to economize (or through natural human laziness!)

These reports also only work as reports when they’re read through thoroughly. Without additional effort to paraphrase and condense the essential content, there’s no quick summary available as to what’s working and what’s not.

SBTM isn’t ideal for the testing team either. Teams like to evolve their process through time, learning from release to release, from sprint to sprint. With notes as the chief output, the next test cycle can only improve on the previous one if a tester re-reads all the notes from last time or if there’s a separate process to re-read notes and summarize key findings to be aware of next time.

The bottom line on SBTM is it’s a definite step up in making exploratory testing more useful from a management point of view. But still, no cigar when it comes to at-a-glance reporting and simple reuse for next time.

A more pragmatic approach to exploratory testing

So, how do you balance the value of exploratory testing against the reasonable needs of project management?

It’s actually not that hard. You just have to let go of the idea of leaving the tester completely free. Put some structure around the process but stop short of being too rigid in your control. Essentially, be more pragmatic about it.

Rather than leaving testers completely to their own devices, construct test plans that have significantly more detail in them than SBTM’s high-level charters. Aim for test plans that go into sufficient detail such that a “pass” is enough information to indicate a test idea is working.

At the same time, you need to make sure plans don’t have so much detail that testers have zero freedom. Testers need to be free to come up with their own ways to investigate the correctness of each test idea. Treat “tests” not as steps and expected outcomes but as ideas; ideas for experiments that the tester should explore.

To build this kind of test plan, you might wonder where to start. But, again, it’s not so hard. A good place to start is a simple list of the high-level features, user stories, backlog items, or just the broad capabilities.

From there, embellish each item with further aspects, sub-features if you like, that you want testers to look into. And then iterate again, embellishing at more and more detail until you’re happy with the coverage but without spelling it all out in so much detail that you might as well automate it.
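As a sketch of what such a plan might look like in data form — all the feature names below are hypothetical examples, not from any real product — a nested outline captures the headings, sub-features, and leaf-level test ideas:

```python
# A minimal sketch of an iteratively embellished test plan, kept as a
# nested outline. Feature names are invented for illustration.
test_plan = {
    "Sign-up": {
        "Email validation": ["rejects malformed addresses",
                             "accepts plus-addressing"],
        "Password rules": ["enforces minimum length",
                           "shows a strength hint"],
    },
    "Checkout": {
        "Cart": ["updates totals on quantity change",
                 "handles an empty cart"],
        "Payment": ["declined card shows a friendly error"],
    },
}

def outline(node, depth=0):
    """Flatten the plan into indented lines: headings plus '- ' test ideas."""
    lines = []
    if isinstance(node, dict):
        for heading, children in node.items():
            lines.append("  " * depth + heading)
            lines.extend(outline(children, depth + 1))
    else:  # a leaf list of test ideas for the tester to explore freely
        lines.extend("  " * depth + "- " + idea for idea in node)
    return lines

print("\n".join(outline(test_plan)))
```

Note that each leaf is a test *idea*, not a step-by-step script — how to investigate it is left to the tester.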

There’s no doubt this loses out to the idealistic value of a tester deciding on-the-fly where to invest the most time. On the other hand, because no instructions have been given about how much or how to test within each item, it still benefits from the investigative intuition of the brain-powered tester.

So, like with SBTM, we have a decent plan of what’s to be done, of how big the test task ahead is. But now we also have a really nice and simple way of tracking progress.

As testing progresses, testers collect simple pass/fail (with optional comments) against each test idea. The output is a simple yet comprehensive list of items that have been looked at and whether or not problems were found.

Scanning a list of tests with pass/fail results makes it really obvious how much has been tested, and any problems will draw the eye — especially if you color them in red!
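As a sketch (with invented results), tallying those pass/fail marks into a quick progress summary takes only a few lines:

```python
# Hypothetical results recorded against each test idea: a status plus an
# optional tester comment. In practice this lives in whatever tool you use.
results = {
    "rejects malformed addresses": ("pass", ""),
    "accepts plus-addressing": ("fail", "plus sign stripped before validation"),
    "enforces minimum length": ("pass", ""),
    "declined card shows a friendly error": ("pass", ""),
}

def summarize(results):
    """Return (passed, failed) counts across all recorded test ideas."""
    passed = sum(1 for status, _ in results.values() if status == "pass")
    return passed, len(results) - passed

passed, failed = summarize(results)
print(f"{passed} passed, {failed} failed, of {len(results)} test ideas")
```

That one-line summary is exactly the kind of answer a manager walking past can digest.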

Again, for emphasis: don’t miss that this style of reporting is great for showing what’s working as well as what’s not. So often in testing, you only hear about problems via bug reports in an issue tracker, which leaves you wondering whether there weren’t any problems anywhere else, or whether everywhere else was actually tested.

Good test reports tell you not just what wasn’t working but what was working.

Good test reports tell you not just what wasn’t working but what was working. Otherwise, who’s to say whether no problems were found or no problems were even looked for?

This kind of test plan can be iterated easily. If more test ideas come up, you can just add them to the list. It even works during testing. After all, it’s as you’re testing that you’re most likely to have new ideas for new tests. Which, of course, is why exploratory testing is so useful in the first place: think back to “continuous simultaneous test design and test execution.”

But with the pragmatic approach, you’re putting those new test ideas down on paper so they can contribute next time — not just an idea for this time.

Pragmatic exploratory testing tools

Another benefit to the pragmatic strategy is that it doesn’t take a lot to get started. Testers just need a place to write down their results, observations, and new test ideas.

Of course, some tools are better than others.

Post-its will do the job, but they’re easy to lose and don’t give the most professional impression.

Whiteboards are harder to lose but difficult to share. And if someone walks too close to the board, you’ll lose months of work on their shirt…

Word processors are good for building and iterating tests over time, showing what has been tested and what has yet to be tested.

Google Sheets or Excel are a bit cleaner, with columns to show exactly what was tested, what passed, and what failed. They also make it easier to total up the number of tests and their success. But they’re problematic when you want to create an outline or hierarchy, and it’s easy to pour several hours into making them look presentable.

Then there’s Testpad… which (with, ahem, obvious bias) we think is the best tool for a pragmatic approach to exploratory testing.

  • It’s easy to use. Testpad has the same simple listing facility that documents or spreadsheets do. Anyone who’s used Google Sheets or Excel gets the hang of it in minutes.
  • It’s easy to add hierarchical information. Unlike spreadsheets, Testpad lets you outline tests and group them logically, making it much easier to appreciate how comprehensive a test cycle is. Instead of reading every cell in a worksheet, you can just scan a collection of headings and subheadings.
  • It’s easy to edit and add new tests. Testers don’t need to right-click and ‘insert new row’ - they can just type in new tests as they go, being as verbose or concise as required.
  • It’s easy to report. Testpad uses a spreadsheet-inspired grid of results, with checks and crosses that make it easy to see where testing stands. The reports work as both live progress trackers during testing and elegant final reports when testing is complete.
  • It’s easy to collaborate. Assign test runs to co-workers, or invite guest testers for extra help - it’s all cloud-based, making it easy to share plans and progress anytime, anywhere.

Wrapping it up

So... Exploratory testing brings a lot to the table. It’s an approach that leverages human creativity and intuition, uncovering issues that automated or fully-scripted testing can miss.
While it may seem too unstructured at first glance, adding a layer of pragmatism can turn it into a highly effective process: balancing flexibility with a bit of structure helps in making exploratory testing trackable and repeatable.

To truly get the most out of this approach, having the right tools can make a significant difference. A tool like Testpad can streamline the process, making it easier to organize, execute, and report on your testing. With Testpad, you can maintain the creativity and flexibility of exploratory testing while also meeting the practical demands of project management.

If you’re ready to see how Testpad can support your testing efforts, visit our website to learn more or sign up for a 30-day free trial. Happy testing!
