Bring Your AI
you got rid of the wrong thing
Earlier this week, I wrote about what actually separates good interviewers from good software engineers, and why the LeetCode whiteboard trap selects for memorization over thinking. If you missed it, the short version is this: the candidates who stood out across fifteen years of interviews weren’t the ones who knew the most. They were the ones who leaned in when they didn’t know something, asked good questions, and couldn’t stop talking about what they were learning.
That post was about diagnosing the problem. This one is about what to do instead, specifically now that AI has made the problem impossible to ignore.
You’re Testing The Wrong Constraint
A lot of companies are adding friction to keep AI out of interviews. I hear about interviews using lockdown browsers or demanding screen sharing. I recently worked at a company that insisted that everyone had to interview in person, even though the roles were remote (which is an interview training problem much more than an AI problem). The assumption baked into all of it is that the “real” engineer is the one who can produce correct code from memory, unaided, and on demand.
That person doesn’t exist on your team, and that person doesn’t really exist anywhere anymore. The actual job you’re hiring for is the same as it’s always been: take messy, ambiguous problems, reason through them clearly, and then use every tool available to get to a good solution. So why are we screening for something else?
The question above is rhetorical, but my personal opinion is that too many software leaders live in a world where “that’s the way we’ve always done it” trumps any real concerns. I talked about That’s The Way We’ve Always Done It at length here.
Turn It Around
Here’s what I’d do instead. Give candidates real problems. Tell them to open Claude Code, Cursor, Copilot, whatever they actually use. Have them share their screen.
Then watch.
The first five minutes will tell you almost everything. Before a single line of code exists, what does the candidate do? Do they immediately start prompting, or do they pause to frame the problem first? Do they ask clarifying questions about constraints, edge cases, or what done actually looks like? A strong candidate treats AI the way they’d treat a senior colleague. They give it context before asking for help. Weak candidates dump the problem verbatim into a prompt and wait.
Watch how they write the prompt. Are they vague or specific? Do they specify what they want and what they don’t? Prompting well requires the same skill as writing a good requirements doc or a good design brief. It’s communication, and you’re watching them communicate.
Then see what they do when the code comes back. A candidate who copies the output without reading it is showing you how they’ll behave on the job. A candidate who reads it critically, who asks whether it actually solves the problem, checks for edge cases, and notices when the model made a bad assumption, is a different person entirely.
The ability to evaluate AI output is a core skill now. Test it directly.
This Isn’t Just For Engineers
The same principle applies across every professional role. AI fluency isn’t a technical skill anymore. It’s a thinking skill.
Consider a product manager interview. Give the candidate a poorly defined feature request and ask them to use AI to help draft a requirements document. Do they accept the first output, or do they interrogate it? Do they push back when the model makes assumptions that don’t reflect the actual user? A strong PM brings judgment to the process. A weak one outsources the judgment along with the writing.
Same with a marketing role. Give the candidate a real campaign brief and let them use AI to develop positioning options. What you’re watching for isn’t the quality of the AI output. It’s whether the candidate can evaluate which direction is actually right for the audience, why the other options fall short, and how to take the raw material somewhere useful. That’s the job. You’re just watching them do it.
The Signal You’re Actually After
This connects back to what I was looking for across hundreds of hiring interviews: thinking, judgment, and the genuine curiosity to engage with a problem you haven’t seen before. Those things haven’t changed. What’s changed is that AI gives you a live window into all three at once.
The candidates who will matter over the next five years (and likely longer) aren’t the ones who can recall an algorithm. They’re the ones who are humble enough to use the tools available, hungry enough to use them well, and smart enough to know the difference between AI output and a good answer.
Stop banning the tool. Start watching how they use it.