How you do QA is probably terrible.
Of course I don’t know you, but most software development shops that I’ve worked at or chatted with “do QA” in a way that is pretty terrible.
Almost everyone does it, and almost everyone considers it a “best practice”, and yet somehow it never even got a name. I’m going to call it “block-testing” because it’s the kind of testing that unfortunately makes continuous deployment and continuous delivery fundamentally impossible.
It’s the practice of having humans try to regression-test an entire product (or some high-value subset of its features), usually with the goal of finding reasons to stop it from being released. The most extreme (worst) case is when a specific role exists solely for this purpose (almost always given the misnomer “QA”).
Just to be super-duper clear: I’m aware that almost everyone is doing block-testing. That doesn’t mean it’s a good fit for every organization, though. If you actually want to achieve continuous deployment or continuous delivery, this is one of the big-batch operations that you’ll need to forgo.
Alright, a couple of those things are obvious to most people, I think, so let’s dig into a couple of the less obvious ones:
Hiring people whose sole purpose is to find reasons to stop a release is counter to the goals of a company that is trying to release software quickly. Of course block-testers are actually tasked with making sure each release is high-quality (and that’s a critical goal!), but the end result is that, as block-testers, they naturally optimize their role toward “finding release-blockers”. They’ll log other defects too, of course, but those don’t carry the same weight for them (or anyone else) as issues that block releases.
Also when you’ve got one group of people responsible for finding fault in another group’s work, you’re going to have conflict between the groups. Engineers are going to start saying that the bug isn’t theirs or the bug isn’t “that bad”. Testers are going to say the opposite. Each group is going to blame the other when management wonders why releases are delayed or infrequent. You’ll get the same finger-pointing when a defect gets through to production as well. The ensuing arguments are a massive waste of time and each “side” becomes less empathetic to the other’s plight over time.
I’ve heard the words “You’re the tester! Test it!” a few times in my career. The sentiment is that development and testing are distinct phases rather than something that happens iteratively during development, and “the developers do the developing and the testers do the testing”. “This frees developers up to focus on just development”.
Nothing could really be further from the truth, and I have serious doubts about the experience of any developer who thinks they shouldn’t be testing their own work along the way, as if they could hammer out 15 or more lines of code without bothering to check whether it works. The fact of the matter is that quality is entirely the responsibility of the developer, and any successful developer is testing constantly as they go. The tester’s only role, conversely, is to assess quality; the tester has no responsibility whatsoever to improve it.
At the same time, the very presence of the tester sends a confusing signal. It signals that the developers aren’t wholly responsible for quality (and maybe even that they’re not trusted with it!). It diminishes their skin in the game and disincentivizes quality. Of course a tester-only role has no ability to actually fix defects, and you can’t sanely hold someone responsible for something they have no control over.
I would wager that this division of responsibility is the single greatest cost of block-testing. The testers and the developers start to engage in a kind of game of ping-pong where the developer tries to see what they can get past the tester, and the tester tries to find any reason at all to block the release. I’ve seen a feature go back and forth from developer to tester more than a half-dozen times before the tester agreed it was okay to release. That back-and-forth often plays out over days through an issue tracker, for trivially-fixed-but-important issues in high-value features. If you look closely for this kind of activity you can see it all over the place, and it can be about as effective as a football team that uses telegrams to communicate mid-play. These hand-offs are all classic waste from a Lean perspective.
There are many other possible solutions to ensuring quality, many of which are faster and more comprehensive.
Once you realize the harm, the cost, and the inadequacy of block-testing, you naturally have to ask, “but is there anything better?”. You can’t really fault an organization for block-testing if there are no alternatives.
It’s really unfortunate that the industry has resigned itself to the belief that you either go slow or you break things, and that quality and speed are an inevitable and universal trade-off. This is a really limiting belief.
In order to get creative, let’s forget the “best practices” for a minute and get back to first principles. Here’s what we’re trying to do:
Preventative measures are anything that you might do to prevent a defect from affecting a user. They’re a pretty critical part of achieving goal A.
However, goal B means we really have to be careful about which steps we put between the developer and the production environment. Ideally they’re critically important and fast; each step has to pay for itself. Because of this, you have to be really selective about which preventative measures you choose. Some really fast preventative measures: static typing, automated tests, broken-link-checking spiders, linters.
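To make the speed point concrete, here’s what one of those fast preventative measures looks like in practice: a plain unit test over a small pure function, runnable on every commit in milliseconds. This is a minimal illustrative sketch; the `slugify` helper is hypothetical, not from the original article.

```python
# A hypothetical, trivially fast preventative check: a unit test
# verifying a small pure function. Thousands of these run in seconds.

def slugify(title: str) -> str:
    """Turn an article title into a URL-safe slug."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Block Testing Is Slow") == "block-testing-is-slow"

def test_slugify_idempotent():
    once = slugify("Hello World")
    assert slugify(once) == once

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_idempotent()
```

Checks like these cost fractions of a second per run, so they easily pay for themselves as a step between the developer and production.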
You also have to start thinking about how to achieve goal A with more than just preventative measures. Preventative measures aren’t everything. Even the most ardent block-testers have non-preventative measures: feedback from the end-user (even if it goes through five levels of support first). I’ll call these post-deployment measures “mitigating measures”. There are three main categories of mitigation:
Hopefully those categories get your imagination started. User feedback is a mitigating measure; it’s just the slowest and most expensive one. Here are a few that do better:
Of course, you will probably want to do these things even if you never eliminate block-testing from your release process. The prevailing practice seems to be to forgo them and expect the human block-tester to catch everything. That’s bury-your-head-in-the-sand quality management.
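One widely used mitigating measure is a feature-flag “kill switch”: if a newly released code path misbehaves in production, you flip the flag off instead of blocking or rolling back a release. A minimal sketch, assuming a simple in-process flag store (the names `new_checkout_flow`, `checkout`, etc. are illustrative, not from the article):

```python
# Sketch of a feature-flag kill switch. In a real system FLAGS would
# live in a runtime-config service so it can change without a deploy.
FLAGS = {"new_checkout_flow": True}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def new_checkout(cart):
    return {"total": sum(cart), "flow": "new"}

def legacy_checkout(cart):
    return {"total": sum(cart), "flow": "legacy"}

def checkout(cart):
    # The risky new path is guarded; flipping the flag instantly
    # reverts all users to the proven legacy path.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The point is the shape of the recovery: a defect in the new path becomes a configuration change measured in seconds, not a blocked release measured in days.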
If you try these things out, though, you’re almost certainly going to see that these practices are cheaper, faster, and more effective than block-testers.
Automation IS expensive! I have no argument against that. It is. I’ve seen numerous well-covered codebases with more test code than application code, so the upfront cost of automated tests is probably close to that of writing the application code itself.
Over the long haul it pays for itself though. A well-covered codebase easily has thousands of tests each getting run thousands of times a year, so we’re talking about millions of behaviour verifications on the conservative end of things. Humans simply cannot compete with that. Humans are slow, and even the best of them are terrible at repetitive detail-oriented tasks. As expensive as automation is, block-testers are even more expensive.
They’re really not. Many haven’t had much practice because they’re working in block-testing environments where the responsibility is muddled, but with a little practice they’re actually quite good at it when they’re given the full responsibility and trust for it. Don’t be surprised when they don’t choose manual testing as the solution to every quality problem, though.
Block-testing is so common in the industry that many people have a really hard time understanding the place of specific testing personnel in the SDLC without it. There's absolutely a place for specific testing personnel, but they've got to start contributing to improving quality beyond block-testing, and that change is understandably difficult.
Compared to computers, humans really are slow and terrible at repetitive detail-oriented tasks. However humans have well-known strengths that computers do not: they’re creative and curious.
So there are still places for manual testing:
With that said, for the better part of a decade I’ve been working with multiple teams that have no manual-tester role at all. When they do test manually, the developers test things themselves, often as a group (mob testing!). You’ll probably want to at least consider eliminating the manual-testing role entirely.
There’s still a place for quality-minded people beyond manual testing, though. In fact, the concepts of QA and block-testing are really opposites. Actual quality assurance is about making sure that quality is baked into the process from beginning to end. Personnel who just do block-testing are not doing that at all. At best they’re quality control (QC), and that’s a far less valuable role. QA would be involved in a bunch of completely different concerns: How can we prevent defects from ever existing? How can we find them faster? How can we mitigate their impact? How can we recover from them faster?
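As one sketch of the “find them faster” concern: instead of waiting for user feedback, watch the production error rate and alert when it crosses a threshold. In real systems this lives in a monitoring stack; the window size, threshold, and function names below are illustrative assumptions.

```python
# Sketch of automated defect detection: alert when the error rate
# over a sliding window of recent requests exceeds a threshold.
from collections import deque

WINDOW = 100        # look at the last 100 responses
THRESHOLD = 0.05    # alert above a 5% server-error rate

recent = deque(maxlen=WINDOW)

def record(status_code: int) -> bool:
    """Record a response; return True when the error rate warrants an alert."""
    recent.append(1 if status_code >= 500 else 0)
    return len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD
```

A check like this surfaces a bad release in minutes, long before a support ticket would.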
Here’s a laundry list of possible Quality Assurance tasks:
The economics of block-testing make more sense if you anticipate very few releases with ever-smaller differences between them. Agencies write this kind of software, but I haven’t personally done that kind of work in over a decade, so I could be convinced of the economics either way by someone with more recent and extensive experience. Here I’m specifically talking about teams working on a software product that exists over a long period of time.
Okay! I believe you! There are rarely one-size-fits-all practices in software development. I’m simply submitting this counter-argument for consideration. I’ve certainly worked at and heard from many organizations that should strongly consider stopping block-testing because the value proposition for them is just not there.