Don't Blame Me
the post on whether or not dedicated testers are needed that I said I wouldn't write
I shouldn’t have been naive enough to think that people wouldn’t freak out over a statement I’ve used for seven - maybe eight - years now.
Most software teams do not need dedicated testers anymore.
Inevitably, when I make this statement, a lot of people nod their heads and say, “yeah, that makes sense.” Conversely, some people think it’s the stupidest thing they’ve ever heard. Also worth mentioning is that there’s a high correlation between the latter group and people currently in testing roles. Dig in and hold on.
Let’s explore the topic.
When?
Note that I used the weasel-word (see what I did there) “Most”. So, let’s look at extremes. If I’m a one-person dev shop, I can’t have a dedicated tester, because then I wouldn’t develop anything. Would the second person I hire be a tester? Probably not, but it depends on the product. I’ve never worked on medical equipment or anything for NASA, or anything that can potentially kill someone - those products probably need dedicated expert testers with specific knowledge to make sure people don’t die. I’m not even going to attempt to hypothesize how those products are built and tested, so if you’re working on something in that category, carry on as you’ve been doing. However, I hope we can at least agree that some products don’t need dedicated testers, while some products do.
Edit: I’m not saying that non-lethal software isn’t important - I’m saying that we have the ability to experiment, make (small) mistakes, and leverage quick learning from other types of software products. Fast feedback loops are a critical part of my premise. My argument, in a sense, is that most software can take advantage of fast feedback loops to deliver higher quality software, faster. Please read on, even if you’re mad already.
What?
Now we have to elaborate on what we need testers to do. In Accelerate, Forsgren et al. found a high correlation between developer-owned test automation and quality - and no such correlation when the automation was owned by dedicated testers. My experience teaching many hundreds of developers how to write good automation lines up with this data as well. Yes, I know, at least some of you reading this are testers who write automation all day, and I’m telling you that it’s not helping nearly as much as you think it is.
A chunk of the rest of you (not including the people who have already deleted this mail or closed this tab and unsubscribed) may be ok with that, because you don’t want to write automation, and you’re happy being “the voice of the customer”. Well…you are not the customer. No matter how hard you try to advocate for the customer, be the customer, talk to the customer, hug the customer, or dress up like the customer, only the customer knows if the software you’ve given them solves a problem that they have in a way that is satisfactory (or enjoyable) to them. Quality is value to some person, and that person isn’t you. It’s the end user.
Yes - I know you want to make sure that customers don’t receive crap, and I applaud you for that. But functional correctness is the responsibility of software developers, and it doesn’t help in the long run if you are cleaning up after lazy developers.
That’s a lot of rambling, but the recap so far is that you’ll get better software quality if developers own functional correctness of their software and write their own automation. Also, only the customer knows if you’ve solved their problem.
Pause
As a reminder - above, I said that most development teams don’t need dedicated testers. I didn’t say they don’t have them.
What we have today in many teams is a frightening co-dependency between developers and testers. This occurs in siloed orgs as well as with “embedded” testers in whatever flavor of agile you use. A developer writes code, then expects (at some level) that the tester will find bugs. The tester finds bugs (validation), and then the developer thanks them for that validation.
It’s gross. And yes, I’m simplifying, and I know that a lot of teams have actually figured out a whole-team approach to quality, but if quality suffers when the test specialist is on vacation, then you don’t have whole-team quality.
Now What?
Let’s reflect yet again. I probably haven’t described your team, but I’ve hinted at a common way that high quality software is developed (with a few details removed, but we’ll get to them). I know you’re thinking, “our developers don’t WANT to write tests”, or “I’m BETTER at writing tests”, or “I DO know what the customer wants”.
Take a moment and answer this: If the developers on your team wrote the vast majority of test automation and if you had a way to know if your software was solving customer problems, would your team need dedicated testers?
Three Things
I’m betting most of you said “Yes” to my question above, but let me assume you answered that way for non-job-preservation reasons. Honestly, you’re sort of right, but let me fill in the gaps with what needs to happen for this to actually work in practice.
Someone needs to coach the developers on getting started with automated tests. Otherwise (and I’ve seen this) even your most experienced dev will write crap tests that you and I would be embarrassed to look at. A VP once asked me how his developers were doing at testing. I answered, “They do all the testing they know how to do” - which means that they fully embraced testing, but they lacked…nuance. The good news is that I’ve taught 1000s of developers how to write good automated tests, and you can too.
It’s worth noting that it’s not just automation. Developers can do pretty damn good non-automated tests as well. But better yet - they often end up designing code that needs less non-automated testing.

For some products, there may be some tests best written by a dedicated tester. Certainly not end-to-end workflow automation, but deep testing around database integrity or performance suites may fall outside the expertise of even the best developer-testers. For either of these first two gaps, someone could cover them across multiple teams fairly easily. Or, someone could write these tests along with other development or infrastructure work as part of a more generalist role.

Apply lessons from The Lean Startup by Eric Ries. This book is where I really learned the value of feedback loops and customer data in assessing quality. Ries says (paraphrased), “you don’t get any value from your engineering effort until your software is in the hands of customers.” Even desktop app developers can add metrics that answer questions about customer success and uncover errors you could never find in traditional testing - all within a fast feedback loop. If you’re working on a web page or web service, you can update, monitor, and redeploy dozens of times a day to make sure you’re learning and adjusting to what’s working for customers. This isn’t “making the customers do your testing” - it’s understanding how customers are using your software and what their experience is.
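To make the telemetry idea concrete, here’s a minimal sketch in Python of the kind of instrumentation I’m talking about - a tiny, fire-and-forget “did this work for the customer?” event around a user-facing action. The endpoint, event names, and helper are hypothetical, invented purely for illustration; they aren’t from any particular product or framework.

import json
import time
import urllib.request

# Hypothetical collection endpoint - substitute whatever your team actually uses.
TELEMETRY_URL = "https://telemetry.example.invalid/events"

def track(event, success, **fields):
    """Fire-and-forget usage/error event. Telemetry must never break the app."""
    payload = {"event": event, "success": success, "ts": time.time(), **fields}
    try:
        req = urllib.request.Request(
            TELEMETRY_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)
    except Exception:
        pass  # best-effort: drop the event rather than disturb the user

def export_report(path):
    """Example user-facing action, wrapped so the team learns whether it works in the field."""
    try:
        # ... the real export work would go here ...
        track("export_report", success=True)
    except Exception as exc:
        track("export_report", success=False, error=type(exc).__name__)
        raise

The design point is that each event answers a customer-facing question (“did the export work?”), arrives within minutes instead of waiting for a test pass, and is cheap enough to sprinkle across every feature.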
A long time ago, I wrote a little about this in a post called Stop Writing Automation. For advanced information on “real” testing, check out Ronny Kohavi’s book, Trustworthy Online Controlled Experiments, or Lean Analytics by Croll and Yoskovitz.
Edit: Escaping The Build Trap by Melissa Perri is another fantastic resource to help you understand your customers.
The Other
Given the stream of consciousness I usually write in, I’m sure I left some loose ends (and I expect at least a few of you will call me out on those).
One point that came up in a LinkedIn discussion, however, is worth addressing here: the idea of “my butt don’t stink”, or as some put it, “critical distance”.
Based on the work in Accelerate, what I’ve seen working with many, many software teams, and conversations with a lot of people delivering critical software to massively large audiences, I’ll say this: in software development, critical distance is a crock of shit.
I’ve worked with thousands and thousands of developers in my career. I can name the TWO who thought their code was always perfect. One of them was nearly right, and I took joy when I was able to help the other see the error(!) of their ways.
In reality, developers can become excellent testers, and they enjoy testing. With a straight face, I can say that a number of developers I’ve worked with are better testers than 95% of the testers I’ve ever met - and they LOVE testing. They have zero problem taking the code they wrote and doing everything they can to make sure it works - and that, even in the worst of edge cases, it doesn’t fail. In fact, I remember more than a few times when I’ve had to coach a developer away from testing for minute edge cases - partly because the ROI wasn’t there, but mostly because we could just add measurements to alert us if that thing ever happened in production.
Where Are We? / TL;DR
As is often the case with me, I write the long answer before I am able to write the short answer. If the question was, “Alan, can you elaborate on your statement that most teams don’t need dedicated testers?” - here is why:
Some software will always need an expert tester in order to satisfy user needs or compliance.
Development teams should be writing all (or the vast majority) of their test automation. This is backed up by empirical research.
Developers can be fantastic testers (and are perfectly capable of testing their own code) - but often, they need some coaching and help.
Testers have no direct way of knowing if the software they’re working on solves a customer problem in a satisfactory way. Teams can use telemetry (remote metrics) and fast feedback loops to get this information.
As much as I wrote this to clear things up, I’m sure I’m in for a big dump of comments.
Happy New Year everyone.
Thanks, Alan. This is an excellent article that I agree with. I especially like the point that real feedback comes from real customers and not proxies. You've mentioned some examples where it could be hard (or illegal) to depend on real customer feedback or impractical to use that feedback to make improvements.
It would be interesting to discuss approaches to deal with these kinds of feedback loop challenges.
Have there been any studies about developer-led test automation besides the one in Accelerate? I really want it to be true, and validated by more data. I'm hoping this doesn't turn into the same kind of argument as the 10x increase in the cost of fixing bugs that was never corroborated.