In the past year, we’ve seen an interesting divide in the developer community. Some developers have gone all in on using AI as part of their daily workflow. Others are much slower to adopt it. While that difference is normal with any new technology, it is becoming clear that those who do not embrace AI risk falling behind.

This shift recently made us rethink how we hire. A question we asked ourselves was, “Should candidates be allowed to use AI during a technical interview?” A year ago, the answer would have been “no”. Today, we might not even consider hiring someone who does not use AI. That does not mean the fundamentals no longer matter. Quite the opposite. AI is not a replacement for deep technical knowledge; it is an amplifier. The way a developer interacts with an LLM says a lot. How they steer the model, how they write prompts, and how well they validate the output are strong signals of their engineering skill.

This is why our technical tests are open-ended. There is not just one correct solution. Before anyone asks an LLM for help, we ask them to talk us through the possible solutions. If they cannot do that without AI, that is a red flag. Once that is clear, using AI to speed up the work is not just allowed, it is expected.

We follow the same pattern in our own work. Our team uses LLMs to speed up code refactors, improve test coverage, and reduce boilerplate. But it only works with strong guardrails: you need to understand what the model is doing, check the output carefully, and know when the answer is wrong. Without that, AI becomes a risk rather than an advantage. The takeaway is simple. AI does not replace developers; it replaces developers who do not use AI. And it only amplifies those who know what they are doing.
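One minimal sketch of such a guardrail, assuming a Python codebase with an existing set of trusted test cases (all names here are illustrative, not a real tool): never merge an AI-suggested implementation until it passes the tests you already rely on.

```python
# Hypothetical guardrail: gate an AI-suggested function behind trusted tests.
# `accept_suggestion` and `suggested_slugify` are illustrative names only.

def accept_suggestion(candidate_fn, test_cases):
    """Return True only if candidate_fn passes every (args, expected) pair."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return False
        except Exception:
            # A crash is treated the same as a wrong answer: reject the suggestion.
            return False
    return True

# Example: an AI-suggested replacement for a small slugify helper.
def suggested_slugify(title):
    return title.strip().lower().replace(" ", "-")

trusted_tests = [
    (("Hello World",), "hello-world"),
    (("  Trim Me  ",), "trim-me"),
]

print(accept_suggestion(suggested_slugify, trusted_tests))  # True
```

The point is not the specific helper but the habit: the model's output is a draft, and an automated check plus human review decides whether it lands.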