
Manual testing is where human judgment meets software quality. This guide covers the types that matter, when to use each, how to plan sessions, and the tools that keep it all organized.
All manual testing means is that a human interacts directly with software to check it works. You won’t need any scripts or automated frameworks – just a tester, the product, and a list of things to check. It sounds simple, and it is. That's the point!
Manual testing is what most teams already do, often without calling it anything formal. If you've ever clicked through a new feature to see if it behaves as expected, you've done manual testing. The challenge isn't learning what it is. It's doing it consistently, efficiently, and in a way that really tells you something useful about your product's state.
This guide covers the full picture: the types of manual testing, when to use each one, how to plan and structure your sessions, and what tools make it easier. For a step-by-step intro to getting started, see our blog on how to get started with manual testing.
Most of the testing your team does probably falls into one of three categories. Understanding the difference helps you use each one at the right time.
Exploratory testing is the most valuable kind of manual testing. Testers interact with the product freely, inventing test ideas as they go, reacting to what they find, and following interesting leads. There's no script telling you exactly what to click – just a set of areas to investigate and a tester using their judgment to poke around.
This is where the bugs that scripted tests miss tend to show up. Automated tools only catch what they're programmed to look for. A human tester will spot the thing that just feels wrong, or notice that two features interact in a way no one considered.
Read more: A pragmatic guide to exploratory testing
Regression testing checks that things which worked before still work now. Every time your team fixes a bug or ships a feature, there's a risk that something else quietly breaks. Regression tests are your safety net.
The common assumption is that regression testing has to be automated. It doesn't. Manual regression testing is perfectly viable – especially for checks that are hard to automate, or for teams who haven't yet built automation infrastructure. In Testpad, you can start with a handful of prompts and add to the list every time you fix a bug. It builds itself over time.
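One way to picture that growing safety net is as a plain list that every bug fix appends to. Here's a minimal sketch in Python – the prompt wording is invented for illustration, and this is not Testpad's internal format:

```python
# A regression checklist starts small: just the critical paths.
regression_checks = [
    "Login page loads",
    "Search returns results",
]

def add_regression_check(prompt):
    """Record a new prompt when a bug is fixed, skipping duplicates."""
    if prompt not in regression_checks:
        regression_checks.append(prompt)

# Each bug fix adds one prompt, so the same break can't slip through twice.
add_regression_check("Profile photo upload accepts PNG")
add_regression_check("Profile photo upload accepts PNG")  # duplicate ignored

print(len(regression_checks))  # prompts to re-run before each release
```

The point of the sketch is the workflow, not the code: the list only ever grows when something actually broke, so every entry has earned its place.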
Read more: The benefits of manual regression testing
User acceptance testing (UAT) verifies that software does what it was supposed to do for the people it was built for. In client projects, that usually means the client checking the product before signing off. For product teams, it means testing against real user scenarios.
UAT doesn't require an elaborate setup. It requires clear test scenarios and a straightforward way for testers or clients to record what they find. Testpad's guest access lets clients log results and sign off without needing an account – you just give them a link.
Read more: UAT
Before investing time in detailed testing, it's worth checking that the basics work. Can you log in? Does the main page load? If the answer is no, there's no point running anything else yet.
Smoke testing and sanity testing mean essentially the same thing: a quick sense-check before going deeper. A short checklist of critical paths, run in a few minutes at the start of each session, is enough.
So when should you test manually rather than automate? The short answer: more often than most teams expect.
Automation is good at repetitive checks. It can run the same test a thousand times and never get tired. But it can only test what it's been explicitly told to look for. It has no intuition, no curiosity, and no ability to notice that something feels off even if it technically passes.
Manual testing is the better choice when:
- you're exploring a new feature and don't yet know where the bugs might hide
- you're judging usability, wording, or anything else that needs human perception
- you're running UAT with clients or against real user scenarios
- a check is hard or uneconomical to automate, or the automation infrastructure isn't in place yet
The two approaches aren't really in competition – most teams get the best results by combining them.
A test plan for manual testing is essentially a list of things to check. The goal is to capture enough detail that testers know what areas to cover, without prescribing exactly how to test each one.
The most practical format is a checklist of test prompts: short, specific ideas that direct a tester's attention without turning testing into a box-ticking exercise. "Check password complexity rules" is a better test prompt than either "test login" (too vague) or a five-step script with expected outputs (too rigid).
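To make the checklist idea concrete, here's a minimal sketch in Python of prompts with results tracked alongside them – the prompt wording and the data structure are illustrative assumptions, not a prescribed format:

```python
# A test plan as a lightweight checklist of prompts: each entry directs
# a tester's attention to an area without prescribing exact steps.
test_plan = [
    "Check password complexity rules",
    "Check login lockout after repeated failures",
    "Check password reset email flow",
]

# During a session, the tester works through the prompts, recording a
# verdict against each one. None means "not yet run".
results = {prompt: None for prompt in test_plan}
results["Check password complexity rules"] = "pass"

remaining = [p for p, verdict in results.items() if verdict is None]
print(f"{len(remaining)} prompts still to run")
```

Note what the structure leaves out: no steps, no expected outputs. The prompt says where to look; the tester decides how.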
Good test plans:
- stay short enough to keep up to date
- use specific prompts rather than vague areas or rigid scripts
- leave room for the tester's judgment
Read more: How to write your first test plan
Test cases are detailed: steps, expected outcomes, preconditions. Checklists are lightweight: a list of things to investigate. Neither is universally better.
Use test cases when you need a detailed audit trail or are testing something with precise compliance requirements. Use checklists when you're moving fast, testing exploratively, or working with testers who know the product well enough not to need hand-holding.
Most teams default to test cases out of habit rather than necessity. For the majority of manual testing, a well-written checklist covers more ground in less time.
Testing without records isn't testing – it's hoping. Even a simple pass/fail note against each test prompt tells you something useful: what was checked, what worked, and what didn't.
Good manual test reporting doesn't need a lengthy summary document after the fact. A live view of results – updated as testers work through their prompts – is more useful than any post-mortem report. Stakeholders can check progress at any point. Issues surface immediately rather than being compiled days later.
What a test report needs to show:
- what was tested
- whether each check passed or failed
- any issues found, with enough detail to follow up
That's it. The rest is optional. Testpad's reports show exactly this – a pass/fail grid against every prompt, with comments and issue numbers captured inline, shareable with stakeholders via a live link.
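The same reporting idea can be sketched in a few lines of Python – a hedged illustration of computing a live pass/fail summary from recorded results (the prompts and issue numbers here are invented, and this is not Testpad's data model):

```python
# Each result pairs a prompt with a verdict and any issue captured inline.
results = [
    {"prompt": "Check password complexity rules", "result": "pass", "issue": None},
    {"prompt": "Check login lockout after failures", "result": "fail", "issue": "BUG-112"},
    {"prompt": "Check password reset email flow", "result": "pass", "issue": None},
]

# The whole report: how many passed, and what failed with which issue.
passed = sum(1 for r in results if r["result"] == "pass")
failed = [r for r in results if r["result"] == "fail"]

print(f"{passed}/{len(results)} passed")
for r in failed:
    print(f"FAIL: {r['prompt']} ({r['issue']})")
```

Because the summary is derived from the results as they're recorded, it's always current – which is exactly the "live view" advantage over an after-the-fact report.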
Read more: Simple test reporting that actually works
Manual testing doesn't really need much: a way to record what you're going to test, a way to capture results, and a way to share findings with the rest of the team.
Most teams start in spreadsheets. They work for simple setups, but they get unwieldy as test plans grow and teams need to track results across multiple releases or testers. Managing multiple test runs, filtering by environment, and sharing live progress all require workarounds that waste time.
The traditional alternative is heavyweight test case management tools – formal, database-centric, and designed for teams with dedicated QA departments. For many teams, they add more process than the testing actually needs.
Tools like Testpad sit in the middle: a checklist-based approach that maps directly to how manual testing actually works. You have a list of things to check, you work through them, you mark pass or fail, and you capture any issues as you go. Anyone on the team can join as a guest tester without needing an account.
Key things to look for in a manual testing tool:
- test plans that are quick to write and easy to reorganise
- fast pass/fail result capture as you work
- live, shareable reporting for stakeholders
- guest access so clients and occasional testers can join without accounts
Read more: Testpad features
Manual testing works best when it becomes a habit rather than a last-minute scramble. Start with the things most likely to break, build up your test prompts over time, and share results as you go.
If you want a tool that makes all of that easier without adding process for the sake of it, try Testpad free for 30 days.
