Delayed this week - I thought I posted…
Long time readers (thank you all!) know that I mostly write about leadership topics here, with some occasional forays into general software development and hiking stories. You all also know that sometimes I write about why I don’t think most software teams need dedicated testers.
But - I realized that I have never really written about how to make that transition. We’ve all heard of companies who removed testers, had zero plans for making the transition work, and failed completely. Their failure doesn’t necessarily mean the team needed dedicated testers - more likely, they failed to recognize how much testing they still needed.
Step One - Don’t
It’s been ~10 years since I delivered a talk called “Testing without Testers.” In that talk, I mentioned this article about Yahoo.
…which reminds me of the Brent Jensen quote: “Don’t rip off the band-aid, because sometimes it’s a tourniquet.”
A lot of companies saw the improved cycle time and quality from teams without testers, and blindly (and dumbly) moved to get rid of their testers without a plan. Most of those companies either failed or frantically re-hired their testers.
Perhaps the bigger problem is that other software teams saw the failure and blindly (and dumbly) inferred that testing without dedicated testers was a fool’s errand.
They’re wrong too.
Some Context
I’ve been a part of dissolving several test teams over the last decade. In fact, I’m so confident in my ability to improve velocity and quality while removing testers** that it could be my niche retirement job (** worth noting that when I “dissolve” test teams, I’ve never had anyone fired or laid off - I’ve found them new roles where they can leverage their skills in new and helpful ways).
Start With The (inner) Loop
Now that I’m writing this down, I have a hypothesis on the problem. If companies are looking to improve velocity and feedback loops by removing dedicated testing specialists, they have a pretty good chance of pulling that off. Companies that see it first as eliminating roles will most likely fail.
Start with the feedback loop. If a development team delivers a feature to a test team - who then write automation and do exploratory testing before returning results to the development team - that’s a long time to wait for feedback.
Yes, it can get faster as the test team’s frameworks improve and communication improves, but feedback loops are where speed matters. Someone on LinkedIn last week was ranting about a straw-man argument on velocity. Their points were technically correct if the goal is velocity, but the goal is not velocity - it’s feedback.
Hand-offs are often bottlenecks. Reduce the hand-offs, or at least reduce the friction between them. Research shows that automation owned by the dev team has a high correlation with quality, and that no such correlation appears when a separate team writes the automation (see Accelerate, by Forsgren et al., Chapter 4). It doesn’t matter if a separate team is better at writing automation today; developers should own automated testing. To get them there, pair with them, coach them, review their test code, and help them learn. In my view, being unwilling to learn and being unwilling to transfer specialized knowledge are both fast tracks to job loss.
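To make “developers own the tests” concrete, here’s a minimal sketch of what an inner-loop test can look like, assuming a Python codebase with pytest (the parse_discount_code function is hypothetical):

```python
# A minimal sketch of a developer-owned, inner-loop test.
# parse_discount_code is a hypothetical example function.
import pytest


def parse_discount_code(code: str) -> int:
    """Return the percentage encoded in a code like 'SAVE15'."""
    if not code.startswith("SAVE") or not code[4:].isdigit():
        raise ValueError(f"unrecognized discount code: {code!r}")
    return int(code[4:])


def test_valid_code_returns_percentage():
    assert parse_discount_code("SAVE15") == 15


def test_malformed_code_raises():
    with pytest.raises(ValueError):
        parse_discount_code("FREE-STUFF")
```

The shape is the point: the person who writes the feature writes the check, runs it on every change, and gets an answer in seconds instead of after a hand-off.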
In my experience, testers who have focused on automation, transition well into roles in development - especially in platforms/infrastructure. Personally, I’ve written a lot of diagnostic and debugging tools to help speed up and simplify some of the most tedious tasks in software development.
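As a hypothetical example of that kind of tool (a sketch, not one of mine), here’s a tiny log triage script that collapses a noisy log into unique error lines with counts, so a human only reads each distinct failure once:

```python
# Hypothetical sketch of a small diagnostic tool: summarize a noisy
# log by grouping near-identical ERROR lines, most frequent first.
import re
import sys
from collections import Counter


def summarize_errors(lines):
    """Count ERROR lines, normalizing hex addresses and numbers so
    near-identical failures group together."""
    counts = Counter()
    for line in lines:
        if "ERROR" not in line:
            continue
        normalized = re.sub(r"0x[0-9a-fA-F]+|\d+", "#", line.strip())
        counts[normalized] += 1
    return counts


if __name__ == "__main__":
    for message, count in summarize_errors(sys.stdin).most_common():
        print(f"{count:6d}  {message}")
```

Run it as `python triage.py < app.log`, and the tedious part of a bug hunt - figuring out which failures are the same failure - takes seconds instead of an afternoon.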
Accelerate the Achievement of Shippable Quality
Rely On The (outer) Loop
The inner loop is what you can do inside your dev team. It helps with code correctness and functionality. But - as Eric Ries has said, you don’t get value from your engineering effort until it’s in the hands of customers - and he is correct. Fast feedback on the customer/outer loop is critical - and valuable.
Once again, the LinkedIn testing pundits often don’t understand the difference between customer feedback (via metrics) and “asking the customer to test your broken software”. At the very least, add enough metrics to your web site or application that you can understand which features customers are using and the errors they’re seeing. Track how much time they spend using features, or how quickly they leave your site or close your app after using a feature.

I remember learning at one time (via metrics) that the most commonly used feature in office apps was Paste. The most common feature used after Paste was Undo Paste. I know there have been times (decades ago) when I, as a tester, highlighted that a feature probably wasn’t going to be as useful as the product team thought. With no data to back me up, the feature shipped, and customers didn’t find it useful. By the time we got that feedback, I was on a different team, and an “I told you so” didn’t matter.
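If you’re starting from zero, this doesn’t require a big analytics platform. Here’s a hypothetical sketch of the minimum useful event - which feature was used, how long it took, and whether it errored - emitted as structured JSON that any log pipeline can aggregate (the event names and track_feature helper are made up for illustration):

```python
# Hypothetical sketch of minimal feature telemetry: emit one
# structured event per feature use, capturing duration and errors.
import json
import sys
import time
from contextlib import contextmanager


def emit(event: str, **fields) -> None:
    """Write a structured event; a real app would send this to its
    existing logging/analytics pipeline instead of stdout."""
    print(json.dumps({"event": event, "ts": time.time(), **fields}),
          file=sys.stdout)


@contextmanager
def track_feature(name: str):
    """Wrap a feature's execution so every use (and failure) is counted."""
    start = time.monotonic()
    try:
        yield
        emit("feature_used", feature=name,
             duration_ms=round((time.monotonic() - start) * 1000, 1))
    except Exception as exc:
        emit("feature_error", feature=name, error=type(exc).__name__)
        raise


# Usage: with track_feature("paste"): do_paste(clipboard)
```

Aggregate those events and you can answer the Paste/Undo Paste question for your own product - with data instead of a hunch.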
Uno Reverse?
Feedback doesn’t need to flow just from customer to development. In fact, there’s a huge benefit to letting customers know when their feedback leads to improvements or new features. Many (most?) companies underuse this tactic, but closing the feedback loop creates stronger customer loyalty and encourages further engagement. Even something as simple as product update notes or thank-you emails can make a huge difference in how customers feel about their role in your development cycle.
When a customer sees in the release notes that a bug they reported - or especially a problem they noticed but didn’t report - has been fixed, they’re more likely to provide feedback in the future.
But There’s More…
There’s a lot more to quality than testing - even when done by developers who are expert testers. If you need a reminder (and unfortunately, some of you do), the goal is Quality, not Testing.
So if you want to think more about how to create a quality culture where developers do great testing and quality is the goal, you can start with the Quality Culture Transition Guide I wrote a while back. If that piques your interest, Janet Gregory and Selena Delesie expanded on the Guide and wrote a fantastic book called Assessing Agile Quality Practices with QPAM.