I know I’m not the only one getting a little tired of the AI hype. I’m mostly with team Nikhil on the topic after his post earlier this week, subtly titled “I Will Fucking Piledrive You If You Mention AI Again.”
Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your business's software supply chain. Your managed security provider is probably using some algorithms baked up in a lab to detect anomalous traffic, and here's a secret, they didn't do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists.
…which is absolutely correct.
I Think I’ve Heard This Song Before
History does indeed repeat itself - but maybe not to this extent. I remember when Data Science was the magic that was going to solve all of our problems. A decade before that, C# and the .NET Framework were everywhere. “Powered by .NET” was a marketing phrase used far too often. I worked on a version of Windows CE (an early embedded OS from Microsoft) that was branded “Windows CE .NET” for no other reason than weird marketing. We’re humans - we get too excited about things sometimes.
A lot of companies are taking advantage of the hype and slapping “AI Powered” stickers on their snake oil hoping their customers will write big checks to have a piece of the magic.
The Skeptics
I’ve been amused by the people who write or post about how ChatGPT gives them answers that are factually incorrect - hence GenAI must be useless.
LLMs are very good at some things, but not so good at others. Let me make an attempt to explain why (Jason, I’m awaiting your corrections).
An LLM “learns” by consuming massive amounts of data and trying to understand how it’s put together. It knows that fairy tales usually start with “Once Upon a Time” (because it’s “read” a million of them), but it may give you a confident answer if you ask what the capital of Mars is. LLMs know patterns, but not all of the rules. They’re frequently wrong when there’s too little information or too much, and they’re not great with numbers, since they work on word patterns and sentence structure rather than arithmetic.
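To make “patterns, not rules” concrete, here’s a deliberately tiny sketch - my toy example, nothing like the internals of a real LLM - of a bigram model that only counts which word tends to follow which. Notice that it answers confidently even when it has no business answering at all:

```typescript
// Toy bigram "language model": counts how often each word follows another,
// then predicts the most frequent follower. Patterns only - no rules, no facts.

function train(corpus: string[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const text of corpus) {
    const words = text.toLowerCase().split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      const followers = counts.get(words[i]) ?? new Map<string, number>();
      followers.set(words[i + 1], (followers.get(words[i + 1]) ?? 0) + 1);
      counts.set(words[i], followers);
    }
  }
  return counts;
}

function predictNext(model: Map<string, Map<string, number>>, word: string): string {
  const followers = model.get(word.toLowerCase());
  // No "I don't know" here - when the word is unseen, it falls back to a
  // familiar pattern, which is roughly how you get a capital for Mars.
  if (!followers) return "upon";
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

const model = train([
  "once upon a time there lived a princess",
  "once upon a time there was a dragon",
]);
console.log(predictNext(model, "once")); // "upon" - it's seen this a million times
console.log(predictNext(model, "mars")); // "upon" - a confident answer to a question it can't answer
```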
Honestly, if you’re reading this, you probably know all of this. So let’s fast forward.
I Will Piledrive You if You Ask Me About AI in Testing Again
OK - not really, but I get asked a lot about how AI will impact testing. What people really want to know when they ask is whether AI will take away test jobs. As I’ve said a million times, AI won’t take jobs away, but people who know how to use AI effectively just may.
Very early in the AI super-hype madness, I said that GenAI is great, but the real excitement will come when people build practical solutions that leverage AI - and I think a lot of companies are still struggling to find their “big thing”. But sometimes, just a simple idea nudged forward with a little AI is the solution. For example, check out this short demo of Zenes - a nice little utility from the folks at Autify.
If four minutes is too long for you, here’s a recap. They drop a requirements doc into their tool (words), then convert that to test cases. Because they recognize that LLMs aren’t that smart, they give you the opportunity to edit or remove the generated test cases. Then they create Gherkin scripts for those, and finally generate Playwright code to automate those tests.
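For a concrete picture of those last two steps, here’s roughly what a generated scenario and its Playwright automation might look like. To be clear, this is my own invented illustration - the scenario, URL, and selectors are made up, not output from Zenes:

```typescript
// Hypothetical Gherkin a tool might generate from a requirements doc:
//   Scenario: Registered user can log in
//     Given I am on the login page
//     When I submit valid credentials
//     Then I see my account dashboard
//
// ...and the Playwright test that could be generated from that scenario.
import { test, expect } from '@playwright/test';

test('registered user can log in', async ({ page }) => {
  // Given I am on the login page
  await page.goto('https://example.com/login');

  // When I submit valid credentials
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Then I see my account dashboard
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

The interesting part isn’t the code itself - it’s that the human stays in the loop to edit the test cases before any code gets generated.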
It doesn’t change the world, but it does save a crap load of time. I’ve worked on developer tools for most of my career, and most often, the simple tools are the most impressive.
The Question
If your product is a web page, why would you test it for functional correctness any other way? I’m genuinely curious. I fully believe that developers should write the vast majority of automation, but a lot of them push back and say it’s too hard. Not anymore. If you’re a tester stuck writing a bunch of Selenium all day…maybe don’t. There’s probably more interesting testing you could do.
The Adjacent Possible
I’m tired of ChatGPT, and I’m sick of the AI hype. Granted, I use ChatGPT every day and probably couldn’t go without it, but what I’m tired of is people talking about how they use ChatGPT. I’m waiting to see more simple, useful tools built on language models rather than the magic the snake-oil AI salesmen are selling.
One of my favorite books is Where Good Ideas Come From by Steven Johnson. In it, Johnson discusses the Adjacent Possible: the potential for new ideas and innovations that lie just beyond the current boundaries of our knowledge and capabilities. Creativity and progress come from exploring and expanding the immediate next steps that are accessible from where we currently stand. Rather than making giant leaps into the unknown, the adjacent possible involves taking small, incremental steps that build on existing ideas and technologies, gradually pushing the boundaries of what is achievable.
I think a lot of the AI hype is trying to skip the next steps of the Adjacent Possible and make a giant leap forward instead.
And that’s probably why I want to piledrive it.
-A