When ChatGPT takes the test
Let’s play a game: imagine a candidate sends in a coding test, and it’s flawless. Concise, elegant, readable. The logic is tight, the naming pristine. You glance at it, eyebrows raised, heart fluttering. Then the thought hits you (or your technical recruiter): did they write this? Or did ChatGPT do it?
Now what?
AI is no longer just something your team experiments with during hackathons. Your next hire will use AI. It’s now casually passing your recruitment tests, solving take-homes, and helping candidates generate near-perfect pull requests. Some panic. Others adapt. You get to choose.
The cheating reflex
When AI-generated code shows up in technical tests, we’ve seen teams respond with suspicion. And fair enough. These processes were built to evaluate personal problem-solving, not to vet work co-authored with a machine.
But here’s a better question: if Cursor or a Claude agent can solve your technical test, what exactly are you evaluating?
Agencies and internal teams often scramble to "AI-proof" their hiring pipelines. They block tools, limit access, or timebox assignments. It’s an arms race with no winner. We’ve even seen recruiters ask for "proof the candidate didn’t use AI", which is about as effective as asking someone to prove they didn’t use Google in 2007.
Lately, there’s been a shift towards pairing sessions and live coding assessments as an alternative, driven by concerns that take-home tests are too easy to game with AI. It’s a reactive move, and while more interactive, it brings its own challenges around bias, pressure, and accessibility.
It’s time to change the game, not the rules.
The interview starts with a question
At madewithlove, we don’t hand out cookie-cutter take-home assignments. Instead, we ask a simple question, something like: How would you like to demonstrate your technical ability for this role?
Candidates respond with personal projects, detailed walkthroughs, system diagrams, and yes, sometimes code. One shared a full breakdown of how they refactored a legacy architecture to scale under pressure. Another walked us through their Git history like it was a travelogue.
You learn more in those conversations than in any syntax-locked coding quiz. And crucially, it becomes immediately clear who owns the work and who just generated it.
That only works if the interviewer knows what they’re looking at. Technical conversations require technical fluency. If you're relying on general recruiters to assess engineering talent, they’ll need to level up fast, or risk mistaking confidence for competence.
Juniors need a different lens
Not everyone has a portfolio ready to unpack. Juniors, for instance, benefit more from collaborative sessions. So we pair them up with our engineers.
We code together. We talk. We explore how they reason, how they troubleshoot, and how they ask questions. If they use AI during the session, even better. It shows awareness and initiative. If they don’t, that raises more questions than it answers.
The point isn’t whether they can write production-grade code solo. It’s whether they can grow into someone who does.
Also read: Vibe coding and the junior dilemma
AI is part of the workflow now
There’s no value in pretending this shift hasn’t happened. Prompting, validating, and editing AI-generated code is quickly becoming part of daily engineering practice. The best candidates embrace this.
That’s why we encourage teams to stop hiding from it and start testing for it. Ask candidates what tools they use and how. Ask how they double-check results. Ask where AI helped and where it failed. Their answers reveal far more than the output ever could.
Know what you’re hiring for, and how to assess it
A well-defined job description is no longer optional. It sets the stage for every conversation. Without clarity about the role, your interviews will drift, your tests won’t match, and your decisions will feel arbitrary.
And we’ve said it before, but we’ll say it again: if you want to have deep conversations about technical decisions, you need interviewers who understand the domain. This part often gets overlooked. The best interview process is a dialogue between equals, not a quiz administered by someone reading from a script.
In the end, listen more than you test
Hiring good engineers in an AI-powered world is about giving them room to show how they think, how they communicate, and how they solve real problems.
Let them talk through something they’ve built. Let them use the tools they would normally use. Let them guide you through their reasoning.
Because when you let people show up as themselves, you get a better picture of what they’ll be like in your team.
And that’s where the real hiring happens.