9 Comments

Thanks, Alan. This is an excellent article that I agree with. I especially like the point that real feedback comes from real customers and not proxies. You've mentioned some examples where it could be hard (or illegal) to depend on real customer feedback or impractical to use that feedback to make improvements.

It would be interesting to discuss approaches to deal with these kinds of feedback loop challenges.


Have there been any studies about developer-led test automation besides the one in Accelerate? I really want it to be true, and validated by more data. I'm hoping this doesn't turn into the same kind of argument as the claim that the cost of fixing bugs increases 10x over time - a claim that was never corroborated.


Alan, where in the book does this come from?

In Accelerate, Forsgren et al. found a high correlation between developer-owned automation and quality - and no such correlation between automation owned by dedicated testers and quality.

All I was able to find was:

Developers primarily create and maintain acceptance tests, and they can easily reproduce and fix them on their development workstations. It’s interesting to note that having automated tests primarily created and maintained either by QA or an outsourced party is not correlated with IT performance. The theory behind this is that when developers are involved in creating and maintaining acceptance tests, there are two important effects. First, the code becomes more testable when developers write tests. This is one of the main reasons why test-driven development (TDD) is an important practice—it forces developers to create more testable designs. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them.

Forsgren, Nicole; Humble, Jez; Kim, Gene. Accelerate (p. 83). IT Revolution Press, Kindle edition.
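
For concreteness, here is a minimal sketch (Python/pytest, with all names invented for illustration) of the effect that passage describes: when the developer writes the acceptance test alongside the code, the design grows a seam - the injected gateway - that makes the behavior checkable on a workstation, with no external system involved.

```python
# Minimal sketch of a developer-owned acceptance test (pytest).
# `checkout`, `Order`, and `PaymentGateway` are hypothetical names for illustration.

from dataclasses import dataclass


@dataclass
class Order:
    total_cents: int


class PaymentGateway:
    """Interface the production code depends on; tests can substitute a fake."""

    def charge(self, amount_cents: int) -> bool:
        raise NotImplementedError


def checkout(order: Order, gateway: PaymentGateway) -> str:
    # Because the gateway is injected, the developer can exercise this path
    # on a development workstation without a real payment provider.
    if gateway.charge(order.total_cents):
        return "confirmed"
    return "declined"


class FakeGateway(PaymentGateway):
    def __init__(self, succeeds: bool):
        self.succeeds = succeeds

    def charge(self, amount_cents: int) -> bool:
        return self.succeeds


def test_checkout_confirms_when_charge_succeeds():
    assert checkout(Order(total_cents=1999), FakeGateway(succeeds=True)) == "confirmed"


def test_checkout_declines_when_charge_fails():
    assert checkout(Order(total_cents=1999), FakeGateway(succeeds=False)) == "declined"
```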


Good article. Let me start by saying that the question of whether you NEED a dedicated tester can't be answered with a simple yes or no.

Your belief is most likely correct in your context. I believe industry type, as well as company maturity, plays a role. In a hyper-scaled company, with teams who only build text-box components, I agree a tester wouldn't add anything, as that work is mechanical checking and not testing. The crazy part is placing a test resource in that position and being surprised they are not valuable lol. In these cases, if you were to hire a tester, you would most likely position them on an integration team. You would give them access to the customer and the development team. They would also need to be experienced enough not to cry wolf every time they see a minor issue. They need to know how to drive the developers to make quality decisions. Blah blah…..

Maybe we can’t use the word tester anymore, as people get so worked up about it. If you feel that hiring and positioning a resource where they can provide test/QA/engineering value makes sense, then do so - don’t just follow others and blog posts.

I also wonder which is truly harder: writing code that works in one way, or testing that it works in all the ways a customer cares about? I think it’s likely AI will write most code and test most of the mechanical pieces in the near future. Only testing will be left, so I guess developers will become testers and testers will stay testers :)

My career has been in both testing and development, and these are my thoughts. They are correct in my experience, just as your thoughts are in your experiences/contexts.

Cheers


Alan, yet another fantastic article, pulling together a lot of thoughts that I and other SDETs/QAs have had, while adding the perspective of the other side of the wall.

I have tried, for myself and also for others, to move more into a quality coaching role, but it never pans out. Do you feel that engineering leadership generally sees what you see in the industry? Also, do you think leadership in engineering and the business will allow the time their engineers need for testing/automating?


Alan, I've long had a deep respect for a lot of the work you've done and published over the years, but I'll admit that, like you, I discovered most of my testing skills by learning and growing as I encountered ideas. I never really had a 'trainer', though I may have seen something for three hours in a software engineering course in college (which, ironically, matched most of what I saw in a formal ISEB training course that was once made available for free reading and use at a company I worked at. It brought ideas to remembrance, but didn't teach me much new.)

I am in a similar camp as you: I feel that to test well I need to be in there beside the developers, building, and showing the kinds of testing that can be done, to help lead and teach others through experience with new ideas. What I lack recently is the kind of confidence you exude so boldly in this post. I wondered, when you said you've trained thousands of testers, what that number might actually be. Is it just over one thousand, ten thousand, a hundred thousand, or several hundred thousand? I find myself hanging up on that claim, because according to a quick Google search, there are somewhere between 22 million and 26 (or maybe 30) million software developers today. (My rough back-of-napkin math, say 100K / 23 million, is something like 0.43% of all software developers in the world.)
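
(A quick sanity check of that napkin math, with the 100K figure being purely my hypothetical:)

```python
# Back-of-napkin check: a hypothetical 100K trained testers
# against ~23 million software developers worldwide.
trained = 100_000
developers = 23_000_000
print(f"{trained / developers:.2%}")  # -> 0.43%
```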

You mention that you feel a majority of companies can work this way, but I wonder how you can be sure you've not just been lucky to be in situations where people were willing to accept and learn things? I've met many who have found great resistance to new ideas in testing, no matter who would end up carrying the ideas into the dev cycle. I have worked in a wide variety of shops: small startups, small teams within larger companies that contract for others, and at least two that were what I'd call leading-edge companies. I found different practices, and different willingness to adapt, in each of them, and this is where my brain is itching a little. If good testing practice, whether done by a dev or a tester, is more likely to be adopted by leading-edge companies, and if the leading innovators are only, say, 5% of companies, and early adopters add maybe another 15-20%, that still only accounts for a quarter of all the companies that build software.

So naturally, I find myself questioning whether the number of people you've had the fortune to train, in the few companies you've had the privilege of working with, is enough to say what practice needs to be for the whole industry. (And I say this knowing that a lot of good ideas take more than two decades, if not longer, before an entire industry adopts them.)

Again, I feel like I sit in a similar space. I'm back to calling myself a Software Engineer, because testing is just a very strong subskill I bring to the development work I do. I honestly feel, anecdotally anyway, that I can bring more value if I'm building and testing just like all the other developers. Some of my testing friends may chafe that I feel this way, but I've seen how quickly big Fortune companies cut people yearly, cyclically, without any real seeming care about product quality or what it does to the morale of the people left behind.

I also feel that a number of folks are getting tired of being guinea pigs of corporate America. I agree there has to be a balance here, and some of the ideas you mention (which I hope to dig into more if you write more about them) are just the tip of the iceberg of what I think is needed in the industry as a whole. I hope I haven't stepped out of bounds in asking some of these questions, but like I said, I'm working to better understand who I am and where I can continue to be valuable in software, and topics like the one you posted are right at the center of all the learning I'm doing right now. I can't get enough of it.

Thank you for taking time to share. We are all better because of it.

Author:

Hi Tim - thanks for reading, for the kind words, and for the comment.

To be clear, my strong opinion is that most teams don't *need* dedicated testers - meaning that for most software built today, developer-owned testing, plus fast feedback loops that collect data about customer usage, is a better option than dedicated testers. Of course, if developers "don't want to" test or "don't know how" to test - or if the team has not learned how to leverage customer data in this way - they certainly shouldn't get rid of their testers. I've seen that happen, and those teams/companies have failed miserably.

I'm just saying that for most teams, I think it's possible - and the only places where it is NOT possible are teams where a fast feedback loop with customers is impossible.

A lot of people think that getting customer data is "making the customers do all the testing" - which is not true. Functional correctness is 100% up to developers. Any bugs/issues/disappointments encountered by customers should be things that impact the value they're getting - not broken software.
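To make that concrete, here is a rough, hypothetical sketch of what such a usage-data feedback loop might look like in product code - the endpoint, event names, and `emit` helper are invented for illustration, not any particular product's telemetry API:

```python
# Hypothetical sketch of a customer-usage feedback loop.
# Event names and transport are invented for illustration; a real system
# would use a proper telemetry pipeline (and respect consent/privacy rules).

import json
import time
import urllib.request

TELEMETRY_URL = "https://telemetry.example.com/events"  # placeholder endpoint


def emit(event: str, **fields) -> None:
    """Send one usage event; failures are swallowed so telemetry never breaks the product."""
    payload = json.dumps({"event": event, "ts": time.time(), **fields}).encode()
    req = urllib.request.Request(
        TELEMETRY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # never let telemetry take the product down


# In product code: record that a feature was reached and whether it delivered value.
emit("export_report.started")
emit("export_report.completed", duration_ms=412)
```

The team then watches these events for drop-offs or failures in real usage - the feedback comes from customers getting (or not getting) value, while functional correctness stays the developers' job.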

I never said we (as an industry) are there today - but I am for certain saying that it's possible, and that it's a more efficient approach.


Such a great read. So many organizations aren't at this part of the bell curve, but I enjoy trying to help good orgs move in that direction. (And some orgs just won't move, for whatever reasons.)

Terrific post with a lot of great nuance driven by your successes. Thank you!
