
Sanity testing: why you shouldn’t skip it and how to do it right

A foot testing the water temperature, used here as a metaphor for sanity testing in software


Sanity testing doesn't have to be overwhelming – a quick check to make sure the basics are working before diving into more complex testing can give you confidence in the state of your product. It can also save you time and money when done right.


By Pheobe

March 31, 2026

Sanity testing doesn't have to be complicated. It's a quick check to make sure the obvious stuff works before you dive into the deep end of testing. Think about it. Your team's just finished a new build. Before you unleash the full might of your testing process, wouldn't it make sense to spend a few minutes making sure the basics still work? That's all sanity testing is. It's a simple, pragmatic way to catch glaring issues before you waste time on more detailed tests.

What is sanity testing?

Sanity testing is a quick check that your software's basic functions are working before you commit to a full test run. It's not about running all your tests or diving into extensive regression testing – it's about efficiently assessing whether it's worth proceeding at all.

Think of it as a quick temperature check for your software. Before you call in the full medical team (your comprehensive testing process), you want to make sure there's actually a fever to treat. This approach is all about being time-efficient, catching obvious issues with minimal effort before you invest more resources.

Consider this scenario: Your development team has been working on a new feature for your web app. Before you unleash your QA team for a week-long testing marathon, wouldn't it make sense to spend 15 minutes checking if the basics still work? Can users log in? Does the main page load? Does the new feature do anything at all? That's sanity testing in action.

The goal is to go wide before you go deep. You're looking for obvious issues that might derail more extensive testing efforts. It's far better to discover a major problem in a 15-minute sanity test than five days into a comprehensive test run.

Remember, sanity testing helps by being pragmatic. It's not about perfection – it's about quickly gathering the information you need to make smart choices about your testing process. If you're not already doing sanity testing and your full test run takes non-trivial effort, start with a few simple quick checks.

For example, imagine a CSS deployment breaks the login page. A two-minute sanity check catches it before your QA team starts a week-long test cycle – saving days of wasted effort.

Sanity testing vs. smoke testing

Sanity testing and smoke testing mean the same thing in most teams: a quick check of basic functionality before a full test run. Where teams do make a distinction, sanity testing tends to focus on specific areas that have recently changed, while smoke testing covers a broader range of core functionality. Either way, the core concept is the same.

Sanity testing vs. regression testing

A sanity check should not be confused with regression testing – the two are quite different. Sanity testing is a quick pre-cycle check; regression testing is a thorough process to ensure previously fixed issues haven't reappeared.

As we've mentioned, sanity testing is a quick check that all the obvious stuff is working before committing to a longer, more expensive full test. It's about quickly assessing the current state of the software, particularly after recent changes or bug fixes.

Regression testing, on the other hand, is the rigorous testing to ensure that everything that has been broken and fixed in the past isn't broken again this time. It's about ensuring that existing functionality hasn't been compromised by new changes.

The automated part of your regression tests will probably have already been run before you're considering a sanity test, as they're typically quick and easy to run once written. But the manual part of your regression testing (and yes, regression testing can be manual) will be part of the very "full test run" that you want to protect with a quick sanity check first.

In some cases, sanity testing might overlap with a subset of regression testing. For instance, if automated regression tests flag several issues, a quick sanity test might be used to verify these problems and assess their impact before diving into more detailed testing.

Remember, the goal isn't to rigidly categorize every test you perform. The key is to use these different testing approaches pragmatically, choosing the right tool for the job at hand.

Whether you're performing sanity testing or running a full regression suite, the ultimate aim is the same: to gather valuable information about your software's current state and make informed decisions about your development process.

|       | Sanity testing | Smoke testing | Regression testing |
|-------|----------------|---------------|--------------------|
| Scope | Narrow – specific changed areas | Broad – core functionalities | Comprehensive – all previously tested areas |
| When  | After a specific fix or change | After a new build | Before a release |
| Speed | Very quick (minutes) | Quick (minutes–hour) | Slow (hours–days) |
| Goal  | Is this fix working? | Is the build stable? | Has anything broken? |

The importance of sanity testing

Sanity testing is all about efficiency in the software testing process. It recognizes that testing happens within the constraints of limited resources: we're always trying to find as many issues as possible with the least effort.

The importance of sanity testing lies in its ability to catch obvious problems quickly before investing time in more thorough testing phases. It's a smart way to allocate your resources and potentially save significant time and effort down the line.

However, the scope of sanity testing should be proportional to your overall testing process:

  • If your full test run only takes an hour, you might not need a separate sanity check.

  • If your full test run takes several hours, a quick 20-minute sanity test might make sense.

  • For test cycles that take days or weeks, a more extensive "quick" test lasting a day or two could be beneficial.

The key is to keep things in perspective and tailor your approach to your specific needs. Sanity testing is about making sure it's worth proceeding with more comprehensive testing, not adding unnecessary complexity to your process.

If this kind of check fails, it often indicates a fundamental issue that needs addressing before further testing. This early detection can save your team valuable time and resources, ensuring you're not investing effort in testing a build that's fundamentally flawed.

Ideas for applying sanity testing

Sanity testing can be carried out by anyone about to start a test cycle – a QA engineer, a developer, or anyone responsible for signing off a build.

When it comes to applying sanity testing, remember this: IT DEPENDS. Your specific project, team structure, and development process will all influence how you implement your sanity testing.

You perform sanity testing any time you're about to embark on a longer test cycle.

This could be after receiving a new build, following significant code changes, or when you need quick feedback on the software's state. The key is to use sanity testing as a quick check before investing time and resources into more comprehensive testing.

Practical steps:

  • Keep it simple: Don't overcomplicate it – just work out a small collection of simple checks that test whether the main features are working.

  • Use checklists: Checklists are an obvious and simple way to document your sanity tests. They help ensure you don't forget anything, and give you consistency from release to release. A tool like Testpad makes this easy, letting you manage sanity tests as simple checklists without the overhead of traditional test management tools.

  • Focus on what's new or changed: Yes, include quick checks for all the main features of your software, but it's often worth a few extra quick checks on what's new or changed since last time, as those are the most likely areas for new and obvious bugs.

  • Consider a subset of regression tests: If you have an established set of regression tests, consider using a small subset for your sanity testing.

  • Incorporate exploratory testing: Don't be afraid to go off-script. Sometimes, a few minutes of freeform exploration can uncover issues a scripted test might miss.
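To make the "subset of regression tests" idea concrete, here's a minimal sketch in Python. It assumes a hypothetical suite where regression checks are plain functions tagged by hand – all names and tags here are illustrative, not from any particular tool:

```python
# Hypothetical example: a regression suite kept as plain Python functions,
# each tagged by hand so a small "sanity" subset can be pulled out and run
# before committing to the full cycle. All check bodies are stand-ins.

def check_login():
    """Core check: can a user log in? (stubbed here)"""
    return True

def check_catalog_loads():
    """Core check: does the product catalog load? (stubbed here)"""
    return True

def check_discount_rounding():
    """Detailed regression check - too slow and specific for a sanity pass."""
    return True

# Tags are assigned by whoever maintains the suite; only some checks
# belong to the quick sanity subset.
REGRESSION_SUITE = [
    {"name": "login", "tags": {"sanity", "regression"}, "run": check_login},
    {"name": "catalog", "tags": {"sanity", "regression"}, "run": check_catalog_loads},
    {"name": "discount rounding", "tags": {"regression"}, "run": check_discount_rounding},
]

def select(suite, tag):
    """Return the checks carrying the given tag, in suite order."""
    return [check for check in suite if tag in check["tags"]]

sanity_subset = select(REGRESSION_SUITE, "sanity")
print([c["name"] for c in sanity_subset])  # the quick pre-cycle checks
```

If your suite lives in a test framework instead, most frameworks offer an equivalent of tagging (for example, markers or labels) that lets you run just the sanity-tagged subset.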

Sanity testing really doesn't have to be complicated. Your checklist could look as simple as:

  • Can customers log in?
  • Does the product catalog load?
  • Can a product be added to the cart?
  • Does the checkout process work?
  • Is the new "Recommended Products" feature visible?

The exact contents of your sanity test will depend on your specific application. Focus on the core functionalities and any recent changes or additions.
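If you do want to script a checklist like the one above, it can stay tiny. Here's a minimal fail-fast runner sketch in Python – the check functions are stand-ins (a real one might load a page or call an API), and every name is illustrative:

```python
# Minimal fail-fast sanity checklist runner. The checks themselves are
# stand-ins; the point is the structure: run in order, stop at the
# first failure, because a broken basic means the build isn't worth
# testing further.

def run_sanity_checks(checks):
    """Run (description, check_fn) pairs in order; stop on first failure.

    Returns (passed, results) where results maps description -> bool.
    """
    results = {}
    for description, check_fn in checks:
        ok = bool(check_fn())
        results[description] = ok
        if not ok:
            # Flag the failure and skip the remaining checks.
            return False, results
    return True, results

# Illustrative checklist mirroring the bullets above; each lambda would
# be replaced by a real check in practice.
CHECKLIST = [
    ("Can customers log in?", lambda: True),
    ("Does the product catalog load?", lambda: True),
    ("Can a product be added to the cart?", lambda: True),
    ("Does the checkout process work?", lambda: True),
    ("Is the new 'Recommended Products' feature visible?", lambda: True),
]

passed, results = run_sanity_checks(CHECKLIST)
print("proceed to full test run" if passed else "stop: fix the basics first")
```

The fail-fast design mirrors the whole point of sanity testing: the moment a basic check breaks, further checking is wasted effort.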

By applying sanity testing, you quickly assess whether your software is ready for more thorough testing. This simple approach can save your team valuable time and resources, catching major issues before you invest in more comprehensive testing.

Sanity testing tools

Choosing tools for sanity testing isn't hard. In fact, you don't need complicated tools at all.

A pen and paper or whiteboard can work, but they're easy to lose or accidentally erase. Many teams find starting with a list in a spreadsheet is the best approach. It's simple, shareable, and gets the job done.

When spreadsheets start becoming unwieldy due to formatting issues or growing complexity, you might consider upgrading to Testpad – an easy-to-use test management tool built around checklists. It offers more power than a spreadsheet without the complexity of traditional test management tools.

Remember, it's about quick, efficient checks. Avoid over-engineering your process with complex systems. The key is to keep it simple and choose a tool that aligns with your team's needs and workflow.

As for automation tools like Selenium and Cypress, or performance tools like LoadRunner – these belong to regression and performance testing, not sanity testing. While they're valuable in their own right, they're typically overkill for the quick, manual checks that characterize sanity testing.

The bottom line? Start simple. Use what works for your team. And only add complexity when it truly adds value to your sanity testing process.

How to run sanity tests in Testpad

Testpad is designed around checklists, which makes it a natural fit for sanity testing. There's no complicated setup, no test frameworks to configure. Just a list of checks you can run through quickly before committing to a full test cycle.

Here's how to get started:

  1. Create a new script for your sanity tests. Keep it short. Just the core checks that tell you if the build is worth testing further.
  2. Add your checks as simple lines. Things like "Can users log in?", "Does the main page load?", "Does the new feature appear?" No need for formal test case structure.
  3. Run it before every test cycle. Work through the checklist, marking each item pass or fail as you go.
  4. If something fails, stop. Flag the issue to the dev team and save your full test run for when the basics are working.

A sanity test passes when all your core checks complete without issues and the build feels stable enough to test further. It fails when any critical check breaks – a page won't load, a key feature is missing, or something fundamental isn't working. You don't need a perfect score. You just need enough confidence to proceed.

That's it. The whole point of sanity testing is speed and simplicity, and Testpad keeps it that way.

Conclusion

The best sanity tests are the ones that actually get done. Keep yours simple, focused, and repeatable. A short checklist, run before every major test cycle, is all you need to avoid wasting days on a build that was never ready for testing.

If you're looking for a better way to manage your sanity tests, Testpad is built for exactly this. Lightweight checklists that are quick to run and easy to share. Start your 30-day free trial and see how it fits your team.

Frequently asked questions

What is the difference between sanity testing and smoke testing?

The terms are often used interchangeably, and in most cases they mean the same thing. Some teams make a distinction: sanity testing focuses on specific areas that have recently changed, while smoke testing covers a broader range of core functionality. The key concept is the same either way: run a quick check before committing to a full test cycle.

When should you do sanity testing?

Any time you are about to start a longer test cycle. That could be after receiving a new build, following a bug fix, or after significant code changes. If a full test run takes more than an hour, a quick sanity check beforehand is worth it.

How long should a sanity test take?

It depends on the size of your test cycle, but as a rule of thumb: if your full test run takes a few hours, your sanity test should take 15 to 20 minutes. If your full test run takes days, a sanity check of a few hours is reasonable. The goal is to catch obvious issues quickly, not to run a mini version of your full test.

Does sanity testing need to be automated?

Not necessarily. Sanity testing is often manual, especially when it is a quick check before a test cycle. That said, if you already have automated tests, running a small subset of them as a sanity check is a perfectly valid approach. The goal is speed and simplicity, so use whatever gets you a quick answer fastest.

