A few weeks ago, I talked about some parallels between how a lot of folks are trying to ignore or dismiss AI and the way they tried (and continue to try) to ignore how Agile changes the way we (can) deliver software.
Then Friday, Paul Goade tagged me and a few others on LinkedIn, and asked:
Hey Software Testing / Quality Engineering Community - seriously, how is AI helping you with your testing (or is it)?
Oh Boy.
James Bach (one of the also-tagged) just wrote about this - sort of - in this post, and I think he makes good points. What the answer really comes down to is how you treat the AI - or in this case, ChatGPT. It’s easier to make it misbehave than to make it helpful, but the blame for that goes on the user, not the generative AI tool.
To answer Paul’s post, let me see if I can draw on the themes from my recent post: Dismissal, Fear, and Collaboration.
Dismissal
Probably the easiest thing to do as a tester (or any knowledge worker) is to ignore how AI can help you. My LinkedIn feed is filled with complaints like:
I asked ChatGPT to create a Test Plan and it created shit.
I asked ChatGPT the difference between black box and white box testing and it was wrong.
I asked ChatGPT why I should use code coverage, and I disagreed with its answer.
etc.
I’ll summarize by stating the obvious - if you ask an AI questions it isn’t very good at answering, it will give you answers you don’t like. If you’re truly stuck here, you need to move on and get to the new stuff.
Fear
You could also go the other direction. ChatGPT is pretty darn good at creating unit tests and tests for functional correctness. I can feed it some HTML, have it generate Page Objects, and then have it write a suite of unit tests that offer reasonable coverage in a fraction of the time it would take by hand.
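Something like this minimal sketch - the form, the selectors, and the Jest-style test are hypothetical stand-ins (assuming Jest with a jsdom environment), not actual ChatGPT output:

```ts
// Hypothetical Page Object for a simple login form. The selectors are
// placeholders; generated code will match whatever HTML you feed in.
class LoginPage {
  constructor(private doc: Document) {}

  get username(): HTMLInputElement {
    return this.doc.querySelector<HTMLInputElement>("#username")!;
  }
  get password(): HTMLInputElement {
    return this.doc.querySelector<HTMLInputElement>("#password")!;
  }
  get submit(): HTMLButtonElement {
    return this.doc.querySelector<HTMLButtonElement>("#submit")!;
  }

  logIn(user: string, pass: string): void {
    this.username.value = user;
    this.password.value = pass;
    this.submit.click();
  }
}

// A test in the style ChatGPT generates (Jest globals assumed).
test("logIn fills both fields before submitting", () => {
  document.body.innerHTML = `
    <input id="username" /><input id="password" />
    <button id="submit">Log in</button>`;
  const page = new LoginPage(document);
  page.logIn("alan", "hunter2");
  expect(page.username.value).toBe("alan");
  expect(page.password.value).toBe("hunter2");
});
```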
If your job is to look at HTML, create page objects and then generate reasonable test coverage, then you may be fearful of AI. In my opinion, however, if that’s your job, you should probably have already been a little worried about your job longevity.
It reminds me a little of my first day at Microsoft, when I was given a spreadsheet (yes, a freaking spreadsheet) of tests. They were a bunch of UI verifications of networking components. I asked if there was a deadline when they should be automated, and my manager said, “Oh no - we don’t have time to automate - we need to make sure these are run every day”.
I had those tests automated three days later, and used my newfound free time to test things that were interesting.
Collaboration
Now, on to Paul’s question. How is AI helping you with your testing? I don’t test as a daily activity by any means, but the ideas are endless.
Years ago, I wrote a simple app called Numberz to practice testing against (Windows only - sorry). The elaborate(!) description of the app is:
When you press the “Roll!” button, the app generates 5 random numbers between 0 & 9 (inclusive), and displays their sum in the “Total” field. The stakeholder’s primary objectives are that the numbers are random, and that the summing function is correct.
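The real app is a Windows executable, but the spec boils down to something like this minimal sketch (my stand-in, not the app’s actual code):

```ts
// Five random integers between 0 and 9, inclusive.
function roll(): number[] {
  return Array.from({ length: 5 }, () => Math.floor(Math.random() * 10));
}

// The value shown in the "Total" field.
function total(numbers: number[]): number {
  return numbers.reduce((sum, n) => sum + n, 0);
}
```

The stakeholder’s two objectives map straight onto those two functions: is roll actually uniform, and does total add correctly?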
I gave ChatGPT this “spec” and asked it for a testing strategy. I won’t paste the entire output, but it included verifying functional correctness, as well as testing for randomness. This is an important part of testing this app that a surprising number of folks forget. Testing for randomness is hard - and worth a deeper discussion at some point.
I was pleasantly surprised to see the output include sections on why randomness is difficult to test, along with some tips on testing for it. In this case, we are using ChatGPT to find any holes in what we may have been thinking already. We’re not looking for answers, we’re looking for ideas.
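One of the standard tips is a chi-squared goodness-of-fit check. Here’s a minimal sketch against the roll function above - the critical value comes from a standard chi-squared table, and a single check like this can only flag gross bias, not prove randomness:

```ts
// Chi-squared statistic for a claimed-uniform distribution over 0..9.
function chiSquaredUniform(samples: number[], buckets = 10): number {
  const counts = new Array(buckets).fill(0);
  for (const s of samples) counts[s]++;
  const expected = samples.length / buckets;
  return counts.reduce((chi, c) => chi + (c - expected) ** 2 / expected, 0);
}

const samples = Array.from({ length: 10_000 }, () => roll()).flat();
const chi = chiSquaredUniform(samples);
// With 9 degrees of freedom, values above ~21.67 are suspicious at the
// 1% significance level.
console.log(chi < 21.67 ? "no gross bias detected" : "distribution looks skewed");
```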
Worth noting that I asked ChatGPT to create an HTML/JavaScript version of the app for testing, and it did it with no problem (and wrote tests as well). Additionally, it called out that the random number generation it used is not cryptographically secure and should not be used where high-quality randomness is required.
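That callout presumably refers to Math.random(). It’s fine for a dice toy, but if unpredictability actually mattered, the Web Crypto API is the usual browser alternative - a sketch:

```ts
// A cryptographically secure random digit 0..9 via the Web Crypto API.
function secureDigit(): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  // Rejection sampling: discard the top sliver of the 32-bit range so
  // the modulo by 10 stays perfectly uniform.
  const limit = Math.floor(2 ** 32 / 10) * 10;
  return buf[0] < limit ? buf[0] % 10 : secureDigit();
}
```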
That’s not super-complicated, but it’s just the beginning. I gave ChatGPT a paragraph full of information on what I’ve tested on a fictitious e-commerce site, and it called out additional areas to explore. I asked it for specific examples of strings I could use for SQL injection or XSS, and it gave accurate responses. I had a lot of this memorized at one point, but using AI as an assistant sure is nice.
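For the curious, these are the classic textbook probes - the same category of strings it returned, though not necessarily its exact output:

```ts
// Classic SQL injection probes.
const sqlInjectionProbes = [
  "' OR '1'='1",            // tautology that can bypass a naive WHERE clause
  "'; DROP TABLE users;--", // stacked-query attempt; -- comments out the rest
];

// Classic XSS probes.
const xssProbes = [
  "<script>alert(1)</script>",      // basic reflected-XSS check
  '<img src=x onerror="alert(1)">', // fires even where <script> is filtered
];
```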
Mastermind?
My takeaway? I don’t think anyone should be ignoring or dismissing AI. But - it’s nothing more than a tool to help accelerate and enhance knowledge work. In fact, it’s pretty good at doing the boring parts (generating simple tests), and is a reasonably worthwhile partner in exploring ideas and getting feedback. Sometimes it may give you feedback you don’t agree with or don’t need (this happened to me in every case above). When that happens, it’s ok to ignore the parts you don’t like - AI won’t get mad at you. It doesn’t have feelings, and it doesn’t have critical thinking. Use it for what it’s good at doing.
So, yes Paul, I think AI is a boon for testers - as long as they remember that it’s just an enhancement and accelerant for what their own brains do.
-A
I agree. ChatGPT is good in ‘conversation’. Much like us, if given a question without context, it will answer with what it is processing, not what the brain behind the keyboard is processing. The more interaction between that brain and the AI, the better the results. It is similar to listening. Go figure.
Great thoughts - thank you, Alan!
Keep using those brains, folks!