AI did not kill engineering teams. It made bad decisions cheaper to produce.

There is a familiar story resurfacing in software again.

That AI is finally going to make things simple.

Smaller teams. Faster output. Less need for senior engineers. Just prompts, results, and momentum. It is an appealing narrative, especially for founders and investors who have spent years watching software teams grow while velocity stubbornly refuses to keep up.

To be clear, something real is happening. AI genuinely changes leverage. We see smaller teams shipping things that would have required far more people not that long ago. Boilerplate disappears. Prototypes appear quickly. Internal tools suddenly become viable. The cost of experimentation drops, and that is a good thing.

The problem starts when leverage is mistaken for simplicity. Because software did not suddenly become easy. The complexity did not vanish. It just moved.

Complexity always finds a place to hide

When AI generates code, it also generates decisions on architecture, security, and performance. It makes choices and trade-offs around data, state, and failure modes. The difference is that those decisions are now easier to miss.

This is something we have written about before, often in the context of quality. Quality is not about perfection or elegance. It is about fitness for purpose. AI does not change that definition, but it raises the stakes.

Teams that rely heavily on generated output without understanding what it does tend to accumulate invisible debt. Things work and demos look convincing. Velocity graphs go up. And then, slowly, the system becomes harder to reason about. Debugging feels like archaeology. Onboarding new people takes longer than expected, and making changes starts to feel risky.

None of that is caused by AI itself, but by a lack of ownership.

Smaller teams need stronger judgement

There is an uncomfortable irony in the promise of smaller AI-powered teams. Yes, you can reduce headcount. That part is real.

But the cognitive load on the people who remain increases significantly.

Someone still needs to understand the system well enough to question what is generated instead of accepting it blindly. Someone needs to know when speed is acceptable and when it is dangerous. Someone needs to be able to explain the system six months later, when the original prompts are gone and the context has faded.

This is not a new problem. It is the same one we see with scaling teams in general. Headcount was never the solution. Judgement was.

AI does not remove the need for seniority, but it makes the absence of it more expensive.

Vibe coding feels productive until responsibility shows up

The appeal of vibe coding is easy to understand. It feels creative and fast. It removes friction and gives immediate results. But vibe coding is more a phase than a strategy.

What it really does is postpone responsibility. Maintenance later. Structure later. Observability later. Hiring later. Explaining later. Those decisions still need to be made, but they are now being pushed forward in time. And later usually arrives at the worst possible moment. When customers depend on the system. When the team is already stretched. When change needs to happen quickly.

We see this often during audits. Founders intended to clean things up once the product proved itself. Instead, the system grew more fragile as the team grew around it. At that point, even small changes feel risky.

AI shortens the distance between decision and consequence. That can be helpful. It can also amplify mistakes faster than teams expect.

What actually scales in practice

Teams that use AI successfully tend to behave in very recognisable ways.

They stay opinionated about architecture, even when code is generated. They document intent, not just output. They treat AI as a collaborator, not an authority. They invest early in monitoring and observability, because faster systems also fail faster.

Most importantly, they actively reduce dependency on individuals and tools, like good leadership that makes itself less necessary over time. This creates systems and teams that continue to function when individuals step back.

AI does not change that responsibility, but it sharpens it.

The conclusion most people avoid

AI will not eliminate engineering teams, but it will expose weak ones faster.

Smaller teams are absolutely possible. But only when those teams are capable of making good decisions consistently, understanding the systems they run, and changing them safely when the context shifts.

The companies that win will not be the ones that ship the fastest this quarter. They will be the ones that can still change direction a year from now without fear. That has always been the real work.

AI just makes it harder to ignore.