Many of you know that I spent a long time studying software testing. Over time, I’ve made discoveries - software isn’t made the same as it was 20 years ago, and my approaches to shipping high quality software have changed. I’ve learned that the destination (quality) is far more important than the journey (testing). Probably worth mentioning that this line of thought has led some people to say that I am “harming the craft of testing”. Interesting take, but if my choices are to either improve testing or improve quality, I’m choosing the latter every time.
We Got Problems
I don’t really want to rehash the anger-fest from 9 months ago, but a few points of clarity have been brewing in my brain-soup for a while, and today is as good a day as any to piss people off. My DMs, emails, and social media feeds tell me that more and more people in software are beginning to accept that developers are capable of testing. But a lot of those people don’t believe they should test.
Weird.
Here’s the Thing
Generally, I’m told there are two things wrong with developers testing their software.
The first argument is that developers lack the perspective to test - implying that the dedicated tester is somehow better able to evaluate quality, or that a testing mindset is required. Quality is the value that the product supplies to the customer (or, as Weinberg has said many times, value to some person). That “some person” is neither the developer nor the tester. It’s the customer.
In fact, Modern Delivery (née Testing) Principle Number 5 says:
We believe that the customer is the only one capable to judge and evaluate the quality of our product.
So the question really is, who is better at learning what the customer needs? The answer could be whoever is performing A/B experiments and analyzing customer usage patterns and error rates. The perspective needed in order to improve quality is the customer perspective. Debating whether a developer or a tester can better provide that perspective is an argument with no answer.
To be clear, my argument is grounded in fast feedback loops - and I think fast feedback loops are the critical component to delivering high quality software. If you are unable (or unwilling) to invest in fast feedback loops for your software, having a team of experts attempting to evaluate what will make customers happy is as good of an idea as any.
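To make the A/B-experiment point concrete, here is a toy sketch (all names and numbers are made up, not from any real product): the feedback loop is "measure what customers actually do, then decide," rather than having anyone - developer or tester - guess on the customer's behalf.

```python
# Toy sketch of an A/B comparison (hypothetical data): learn which
# variant customers prefer from their observed behavior instead of
# debating whose internal perspective is closer to the customer's.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted."""
    return conversions / visitors

# Made-up experiment results for two variants of a feature.
variant_a = conversion_rate(120, 2400)
variant_b = conversion_rate(156, 2400)

better = "B" if variant_b > variant_a else "A"
print(f"A: {variant_a:.1%}, B: {variant_b:.1%} -> ship variant {better}")
```

A real experiment would also need significance testing and guardrail metrics, but the shape of the loop is the same: instrument, measure, decide.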
Salt in The Wound
The second argument, which befuddles me, is time. I’m told that “Developers don’t have time to test”, or that their time is better spent developing software so that the experts can test.
Weird.
First off, testing is (mostly, at least) a requirement of developers at most semi-advanced software companies. Testing isn’t a separate activity from development - testing is part of development. Saying that there isn’t “time” for developers to test is like a chef saying that he doesn’t have time to taste his creations. This is also often known as an unemployed chef.
Perhaps better put is this quote from Scrum Mastery:
Similarly, I have met many who believe that you have to have a special mindset to be a good tester. Developers, they say, are a different breed. As such, they can’t be trusted to test. Again, this nonsensical point of view is almost certainly going to become self-fulfilling. The more that developers are not trusted to test, the more they will be unable to test and the more they will shirk the responsibility of writing good code.
To be fair, in some cases, I do believe that a dedicated tester can help accelerate delivery and improve quality by looking at the product as a whole (and hopefully basing their evaluation on customer feedback and needs) - but again, I’d only do this when fast feedback loops were impossible.
Think About the Good Times
Here’s a not-so-hot take. Developers should write ALL of the automation. I could tell you why anecdotally from my decades of shipping software to millions of people, but instead I’ll (once again) share from Accelerate.
Based on our analysis, the following practices predict IT performance:
It’s interesting to note that having automated tests primarily created and maintained either by QA or an outsourced party is not correlated with IT performance. The theory behind this is that when developers are involved in creating and maintaining acceptance tests, there are two important effects. First, the code becomes more testable when developers write tests. This is one of the main reasons why test-driven development (TDD) is an important practice—it forces developers to create more testable designs. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them.
I’ve had the opportunity to work with a lot of great developers who took testing seriously. Every single one of them has told me that writing comprehensive tests for their code has sped up their delivery. They had MORE TIME and got more done when they owned the vast majority of testing because they spent substantially less time doing re-work or fixing bugs.
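The "testable design" effect the Accelerate quote describes can be sketched with a toy example (names are hypothetical, not from any real codebase): hard-coding a dependency like the current date makes a function awkward to test, while injecting it allows exactly the kind of fast, deterministic check a developer can run on every save.

```python
from datetime import date

# Hypothetical example: a pricing rule that depends on "today".
# Calling date.today() inside the function would make the test's
# result depend on when it runs; passing the date in keeps the
# test fast and deterministic - the design pressure TDD applies.

def holiday_discount(price: float, today: date) -> float:
    """Apply a 10% discount during December."""
    if today.month == 12:
        return round(price * 0.9, 2)
    return price

# Fast feedback: these run in milliseconds, on any machine, any day.
assert holiday_discount(100.0, date(2024, 12, 5)) == 90.0
assert holiday_discount(100.0, date(2024, 6, 5)) == 100.0
```

The test existing first (or at least alongside) is what pushes the dependency out of the function - that is the "more testable designs" effect in miniature.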
The time argument is a big freaking dead end.
We’ve Been Here Before
There just isn’t a line anymore between testing and development. While I understand the argument that dedicated testers may be needed in some projects, I think in most cases there are more efficient ways to create customer value. The debate of whether developers should test their code is, in many respects, moot. The end goal is always to deliver quality software that meets the needs and expectations of the customer, and the ultimate success metric is the satisfaction of the user.
-A
Hi Alan,
In my software testing career, I have noticed that testers like to jump first into "Are we building the right thing?" before doing (just as an example) integration testing of a risky third-party integration point.
In answering whether we are building the right thing, their suggestions range from architectural application changes to changes in core application requirements.
This gives me the impression that they want to show they can take on hard questions.
When some of their suggestions are dismissed, discussion threads start to pile up, and we burn our time (money) on those discussions.
But "are we building the right thing?" is a billion-dollar question! Companies try to answer it (and most fail) by pivoting their product over a number of iterations.
When dismissed, testers should put more trust in company decision makers, and let customers decide whether we are building the right thing.
Note: I am a software tester with all three BBST AST courses, and I have been practicing for 15 years. I am also a developer, currently in a developer lead role in the Elixir language domain.