Is automated testing always the best way to test your software? The cost-benefit doesn't always stack up, and manual testing can be cheaper and faster than you think.
Automated testing has quickly become the go-to mantra for many software teams, hailed as the holy grail of efficiency and reliability. While its advantages are undeniable, the question arises: should every testing scenario be automated? In this piece, we’ll explore whether the ROI of automation deserves its reputation, covering its pros and cons in the context of the type of testing you need to do and the resources you have.
If your software team is guilty of always reaching for automated testing without considering the possibility that manual testing might actually be cheaper or more appropriate — keep reading.
Automated testing is when tests are performed by software instead of manually by humans; a way for developers to instantly validate the functionality of their code, using code, rather than doing it by hand.
Using special frameworks and software, developers can create and run thousands of tests while they sit back and sip their coffee. And because testing would otherwise be done manually, automated tests save developers significant time once they are written, which they can allocate to bug fixes, enhancements, and new development.
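As a minimal sketch of what this looks like in practice (the function and its tests are hypothetical, not from any particular framework), an automated test is just code that runs your code and checks the result:

```python
# A hypothetical function and its automated tests.
# Each run re-validates the behaviour instantly, no human needed.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0   # ordinary case
    assert apply_discount(59.99, 0) == 59.99   # no discount
    assert apply_discount(10.0, 100) == 0.0    # full discount

test_apply_discount()
print("all tests passed")
```

A test runner such as pytest would discover and run hundreds of functions like `test_apply_discount` on every commit; the principle is the same at any scale.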
An ideal scenario for automated testing is a stable, well-defined feature with repetitive tests that need to run frequently: exactly the kind of work machines handle better than people.
Automated testing can be considered in most engineering projects, from website frontends and backends to mobile apps, to embedded software, to everything in between. For instance, you can automate unit tests, API checks, regression suites, and performance benchmarks.
As these examples suggest, automation can be useful for your development team, helping them iterate on their projects and deploy changes quickly (aka the key to keeping internal and external customers happy).
Though this article is about why we shouldn't always be automating testing, we must acknowledge that automated testing has real benefits: speed, repeatability, consistency, and cheap re-runs on every release.
Many companies don't hire dedicated testers, but they always have developers. So developers end up writing the tests, and a developer will almost always prefer to automate a test so they never have to run it by hand again.
But in certain scenarios, automation may be more expensive than you realize, or simply not the best way to prove your software does what you hoped. You have to weigh the pros against these significant cons:
You have to write automated tests first if you want to run them, and that requires the skills of engineers who can code. These folks don’t come cheap.
Even if you hire them on a contract basis, the hours will rack up quickly — it takes time for them to design and write functioning code. And the more code you write, the more bugs you’ll get, compounding this issue.
Some applications have so much complexity, unpredictability, and variability that they are almost impossible to test in a standardized manner.
Take mobile apps, GUIs, or video games, for example. If it's not straightforward how to write automated tests for a product, it'll take longer to write them, and you'll probably have to settle for fewer tests.
Automated tests can be far too defined and prescriptive to reach all, or even just most of, the functionality and myriad special conditions. The only bugs you’ll find are the ones you predicted might exist and therefore tested for — the "known unknowns". You won't find the unforeseen issues — the "unknown unknowns" — that testers might find in exploratory testing: free-form testing without prescriptive scripts, hunting for possible problems.
In many cases, you need the kind of real-time human interaction you can only get from manual testing. Human testers catch the undocumented aspects, things that are obviously wrong to any person trying to use the software but that no one may think to write an automated test for. Some examples are a confusing layout, overlapping or unreadable text, sluggish responses, or a flow that technically passes but feels broken to a real user.
Humans can conceive of test ideas and execute them there and then, which is far faster than automating a test for every single conceivable function of the product. With much lower friction, more of the app gets tested, and more hidden defects and vulnerabilities get uncovered — ones that automated tests may miss.
Which brings us to…
If developers writing your automated tests have the wrong assumptions about how a product is supposed to work, they may write perfect tests for exactly the wrong thing. Subsequent “pass” test results will only mean that the wrong thing works — not that your app or software actually works.
The first step to ensuring you test the correct things is to swap devs on each other's code. They may find discrepancies or ask questions that could reveal a significant misunderstanding.
But even then, engineers may not always have the same perspective or context in mind as non-engineer end-users and end up approving automated tests that don’t reflect the main goals or value of the software.
A key aspect of exploratory testing is that testers can invent things to try on the fly. They can do this as ideas occur to them based on what they’ve just observed. In other words, exploratory testers can hunt around for what might be broken.
Automation can't do this if it takes too long to code up each test. A human doing manual tests (especially in an exploratory setting) can hammer through ideas nearly as fast as they can think of them.
It takes significant time and effort to write and maintain automated scripts. You have to write code for every single aspect of the product, and it’s easy to get bored and stop early. A sloppy or hastily written set of automated tests may only sample a handful of features, leaving room for undetected bugs to slip in.
Someone can create or add to tests much faster by hand, and exploratory testing is often the only way to find new problems that weren't predicted in advance. Building manual test plans as you go allows you to capitalize on a time when you’re most likely to have new ideas for new tests and test them right away.
Automation can give you speed and consistency, especially at the unit level. But some products simply aren’t conducive to automated testing at all.
For instance, visual testing, usability testing, and security testing can’t be limited to hard-coded specs. There are way too many implied requirements based on observation and analysis that automation software cannot test.
Video games are another example. It’s extremely difficult to forecast how a player will interact with their environment and other players, let alone use the tools at their disposal. The sheer number of variables and timing dependencies makes it impractical to build automated tests that cover everything.
Even products that seem well-matched for automated testing at first glance really aren’t when you dive deeper. While website testing can be automated, code maintenance becomes a problem. Whenever you make a significant change, your test scripts must change with it, or else the automation breaks.
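To illustrate that maintenance cost, here is a toy sketch (the markup and the `find_by_id` helper are hypothetical, standing in for a real browser-automation locator) of how a hard-coded locator breaks when the page changes:

```python
# Hypothetical example: a test that depends on exact markup.
# When a developer renames the button's id in a redesign, the test
# breaks even though the feature still works fine for users.

OLD_PAGE = '<button id="submit-btn">Buy now</button>'
NEW_PAGE = '<button id="checkout-btn">Buy now</button>'  # after a redesign

def find_by_id(html, element_id):
    """Toy locator: True if an element with this id exists in the markup."""
    return f'id="{element_id}"' in html

# The automated test was written against the old markup...
assert find_by_id(OLD_PAGE, "submit-btn")

# ...and its locator no longer matches after the redesign,
# so the test script has to be updated along with the site.
assert not find_by_id(NEW_PAGE, "submit-btn")
assert find_by_id(NEW_PAGE, "checkout-btn")   # the repaired test
print("locator demo done")
```

Real tools like Selenium or Playwright use more robust locators, but the underlying coupling is the same: every significant UI change ripples into the test code.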
Other products that aren’t well-suited to automated testing include: real-time interactive software, UI tools, VR/AR apps, creative writing apps, and video editing tools.
The reality is that most software needs automated and manual testing.
Automated testing is great for unit tests, ensuring all components making up the system are functioning properly, and it’s also beneficial for running a growing collection of regression tests.
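Regression tests in particular repay the automation effort, because each one pins the exact circumstances of a bug you already fixed. A hedged sketch (the `slugify` function and the bug are invented for illustration):

```python
# Hypothetical regression test: once a bug is fixed, an automated
# check pins the failing input so the bug can't quietly return.

def slugify(title):
    """Turn a title into a URL slug (previously misbehaved on empty input)."""
    cleaned = "".join(c if c.isalnum() else "-" for c in title.lower())
    return cleaned.strip("-") or "untitled"   # fallback is the fix

# The regression suite grows by one test per fixed bug.
assert slugify("Hello World") == "hello-world"
assert slugify("") == "untitled"   # pins the old empty-title bug
print("regression checks passed")
```

Each new entry costs little to run forever, which is why a growing regression suite is one of automation's clearest wins.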
Manual testing is essential for testing the unexpected. Automated testing is too prescriptive and simply doesn’t give you as much coverage and confidence as exploratory testing. Manual testing is also good for completing tests that can be done right now — without waiting for automation to be implemented. You can always "upgrade" those tests to automated versions as and when time, resources, and capabilities allow.
There’s no doubt automated testing can be helpful and speed up the testing process dramatically. But it comes at a high cost, and some products lend themselves far more to automation than others.
The truth is, you need humans — with their intuition and tenacity to try lots and lots of ideas — in the loop. Until automation can fully replace us, manual testing is your ticket to testing success.
Really, you don't need any tools. Just having someone try your product is better than nothing. But writing some test ideas down to encourage repeatability from release to release is a good idea. And the less prescriptive these ideas are, the better (unless, of course, it's for regression testing where you want to address the exact circumstances of a previous problem).
To add some structure, think in terms of checklists: features, functions, capabilities, and contexts. That way, you won’t forget anything you need and want to cover.
Excel is a perfectly good tool to get started. But — with obvious bias — Testpad is an even better one. It keeps the simplicity you're looking for in a spreadsheet but is laid out perfectly for the job of writing down structured test ideas and recording results, release after release.