EDITORIALS

How to Organize Your Exploratory Testing — Without Losing Its Benefits

There’s a way to set up your exploratory testing process so that testers have just enough detail to test the things you want, but not so much that they miss the hard-to-find bugs.

By Testpad

March 14, 2025

Exploratory testing is the best way to expose unforeseen bugs. That’s because testers have the freedom to explore your product, just as an end user would.

The thing is, not all end users use your product the same way.

For example, to edit a post, some people might click an edit button. Others might click a pencil icon in the upper right corner or type an edit shortcut on their keyboard.

And if a tester only tests the edit button functionality — because that’s the approach to editing they personally would take if they were using the product — they could very likely overlook errors hidden in the editing pencil or shortcut.

That’s why exploratory testing needs to have at least a little bit of structure around it. Otherwise, you can’t be sure the important features and capabilities get tested.

At the same time, you can’t be too stringent with your instructions, or you defeat the purpose of exploratory testing.

While those may seem like diametrically opposed requirements, it is possible to give your testers enough leeway and still feel confident in your testing coverage.

But before we get into the how, let’s briefly refresh your memory of exploratory testing.

What is exploratory testing?

Exploratory testing lets testers decide what to test next — based on their expertise, their experience in the space, and how their testing has already gone. At Testpad, we like to say exploratory testing is “making it up as you go along, with your brain engaged.”

The biggest benefit of this form of testing is that it harnesses human intuition and adaptability.

Good testers have a sense of how a feature should behave. And if it doesn’t behave the way they expect, they also have a sense of how to keep testing it in ways that expose even more issues end users would encounter if the feature went live.

As we’ve alluded to, the problem with such an unrestrictive approach is that bugs can fall through the cracks. And in software testing, that’s not a good thing.

How exploratory testing stacks up against other approaches

Because of the coverage risks with exploratory testing, many commercial testing teams fall back on fully scripted testing (think “Type A in this input field,” “Type B in that input field,” and so on).

But, as you might guess, there are problems with that, too.

To understand why, you have to know what constitutes well-managed testing. In our opinion, well-managed testing comes down to four elements:

  1. Planning what to test
  2. Tracking progress
  3. Reporting results clearly
  4. Reusing test knowledge in future cycles

With those elements in mind, here’s how common forms of testing (including exploratory testing) break down:

Purely ad hoc exploratory testing
  • Pros: catches more unknown unknowns than any other form of testing
  • Cons: lacks all four management aspects

Fully scripted testing
  • Pros: easy to track and report on; ensures testing coverage
  • Cons: biases and knowledge are baked into the scripts, making it less likely that testers will find blind spots

Session-based test management (SBTM)
  • Pros: good test planning through high-level charters
  • Cons: tracking, reporting, and reuse rely heavily on reading through test notes

As you’ll notice, none of them are perfect. But if you’re able to combine the good parts of each into a kind of middle ground, you’ll get the results you want.

That middle ground? It’s exploratory testing with some structure.

Best practices for structuring your exploratory testing

Here’s how to take a pragmatic approach to exploratory testing that (1) preserves its strengths and (2) makes it manageable for you and your testing team.

Write test plans as lists of brief test prompts

To make sure you’ve got enough testing coverage, start by mapping out the main features or capabilities of your product that you want exploratory testers to test. Underneath each one, compile a list of things to test related to that feature, including edge cases and special cases.

But — and this is a big but — do not go overboard with your list.

Each test should be as exploratory in nature as possible, so that means you need to keep your prompts super brief. Here are some examples:

Capability: Profile management

Prompts:

  • “Entering extremely long name/special characters”
  • “Updating profile picture with a large file”
  • “Autosave changes”

Capability: Authentication

Prompts:

  • “Unicode in username”
  • “Paste text with formatting into password”
  • “Rapid clicks on login during 2FA”

Capability: Payment processing

Prompts:

  • “Expired credit card during checkout”
  • “Browser autofill”
  • “Prepaid or virtual card”

Where to write your test prompts

You can jot these prompts down in a spreadsheet or a tool that’s purpose-built for exploratory testing, like Testpad. Whatever you use:

  1. Make it easy for testers to record their results. The less intuitive your process is, the less likely testers will follow it to a tee, and the less accurate your results will be. Having a test prompt in each row and asking testers to mark pass or fail in the column next to it should be sufficient (more on that next).
  2. Make it easy to copy/paste. Chances are, you’ll have to use a lot of these test prompts again, so they should be easy to reuse and adjust over time (the sketch below shows one simple layout).
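
If you go the spreadsheet route, a plain CSV makes a fine starting point. Here’s a minimal Python sketch that writes such a template; the file name and column headers are our own illustration, not a required format:

    import csv

    # Hypothetical starter template: one row per test prompt, with empty
    # columns for testers to fill in as they record results.
    prompts = {
        "Profile management": [
            "Entering extremely long name/special characters",
            "Updating profile picture with a large file",
            "Autosave changes",
        ],
        "Authentication": [
            "Unicode in username",
            "Paste text with formatting into password",
            "Rapid clicks on login during 2FA",
        ],
    }

    with open("exploratory_test_plan.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Capability", "Prompt", "Result", "Tester", "Notes"])
        for capability, capability_prompts in prompts.items():
            for prompt in capability_prompts:
                writer.writerow([capability, prompt, "", "", ""])

Because each prompt lives in its own row, copying a block of rows into the next testing cycle (and pruning or adjusting them as you go) stays trivial.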

Helpful hints

  • You don’t need to say “check” or “verify” in your prompts. It’s obvious to your testers that that’s what they should be doing.
  • You also don’t need to say “test all fields.” That can get too onerous and take testers down an unproductive rabbit hole.
  • Filter every prompt like Goldilocks would: not so detailed that it sounds scripted, not so vague that testers won’t know what to do. Aim for just the right balance of the two.

Record results beside your test prompts

Doing so helps you understand exactly what was tested and whether it passed or failed. In your spreadsheet or in a tool like Testpad, we recommend having multiple columns that list:

  • Who was testing, as each person tests in different ways.
  • What environment they were testing in, because the results may differ.
  • What browser (Edge, Safari, Chrome) or device (desktop, iPad, iPhone) testers used — again, those could have varying results.

We also recommend hard-coding “pass” and “fail” values in the cells next to each prompt so recording results is easy and quick for your testers. In the example below, pass is represented as a green check mark, and fail is represented by a red X:

[Image: desktop view of a test plan with a green check mark or red X recorded beside each prompt]

Helpful hints

  • Sometimes, testers may not be able to test a specific prompt or may have questions about how or what to test, so you may want to add other picklist values to your results columns, like “blocked” or “query.”
  • Testers may also have important notes to share about certain prompts or may be logging tickets related to failed tests, so give them a way to leave optional comments or issue tracker references (the sketch below pulls these fields together).
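
Taken together, one recorded result amounts to a small structured record. Here’s a hedged Python sketch of what that record might look like; the class and field names are illustrative, not a Testpad schema:

    from dataclasses import dataclass
    from enum import Enum

    class Result(Enum):
        PASS = "pass"
        FAIL = "fail"
        BLOCKED = "blocked"  # tester couldn't run the prompt
        QUERY = "query"      # tester has a question about the prompt

    @dataclass
    class TestRecord:
        prompt: str
        result: Result
        tester: str          # who tested, since each person tests differently
        environment: str     # results may differ between, say, staging and production
        browser: str         # e.g. "Edge", "Safari", "Chrome"
        comment: str = ""    # optional tester notes
        issue_ref: str = ""  # optional issue tracker reference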

Use your results as a visual indicator of progress

If testers log results next to test prompts, it should be immediately apparent:

  • What’s planned for testing
  • What’s been done (and what’s left to test)
  • What’s working (pass results)
  • What’s not working (failed results)

And you can slice and dice that information as needed before sharing in stand-ups or with higher-ups or clients. Here’s an example of such a report in Testpad:

[Image: Testpad report summarizing pass/fail/blocked/query results by tester and browser]

From this report, we can quickly see:

  • How many tests passed (57)
  • How many failed (12)
  • How many were blocked (1)
  • How many have an associated question (1)
  • Who performed each test (Anna, James, and a guest tester)
  • Which browser each test was performed in (Safari, Chrome, Edge)

And even though we don’t know exactly what product this is, we can see from the report that there’s some sort of issue with how login and password functionality is working in Edge. This can inform our future testing and serve as a starting point for regression tests or even a round of testing on a different but similar part of the app.
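
If you’re tracking results in a CSV like the one sketched earlier, producing those headline numbers takes only a few lines. Here’s a hypothetical example, assuming "Result" and "Browser" columns:

    import csv
    from collections import Counter

    overall = Counter()
    failures_by_browser = Counter()

    with open("exploratory_test_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            overall[row["Result"]] += 1
            if row["Result"] == "fail":
                failures_by_browser[row["Browser"]] += 1

    print("Totals:", dict(overall))  # e.g. {'pass': 57, 'fail': 12, 'blocked': 1, 'query': 1}
    print("Failures by browser:", dict(failures_by_browser))

A purpose-built tool renders the same rollups for you, but the point stands: once results sit in a consistent structure beside your prompts, reporting is cheap.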

Add new test ideas during testing

The best test ideas don’t always come up in planning. Most emerge as testers move through your product and naturally think of variations on your prompts or encounter unexpected behaviors worth investigating further.

Enabling your testers to add their own prompts (by inserting additional rows, for example) is a great way to make sure those ideas don’t get lost — you want them as a reference when that area of the product is tested again.

Over time, this creates a continuously evolving, more effective testing process without the overhead of rigid test case management.

A tool designed specifically to support exploratory testing

While you can design a perfectly good exploratory testing strategy in spreadsheets, they can be tough to manage over time. Even something as simple as not knowing which version is the latest one can throw off your reporting and future testing plans.

Being able to plan, track, adjust, and reuse your prompts — all in one centralized platform — can help you stay organized and adhere to exploratory testing best practices. We’ll admit we’re a little biased, but we think Testpad is a great place to manage your exploratory testing because it has:

  • An outline-based testing structure for test prompt planning
  • Built-in optional commentary and bug tracking fields for tracking results
  • A keyboard-driven editor to type hundreds of test prompts quickly
  • The ability to add free-form test plans as testers come up with new ideas
  • Audit trails to show everything that’s been tested
  • Reports that can be shared via email, link, or printed file
  • Testing tags to selectively include or exclude certain prompts in future testing

Sound like a good fit? Try Testpad out to see whether it’s the right solution for you.

Sign up to get free access for 30 days.
