Let’s say we give air traffic controllers Claude.
Not one of those “rewrite this email” use cases. No, no. We go all in.
Claude predicts flight paths, suggests optimal routing, flags collisions, handles comms.
The controller? Elevated. Amplified. A god among humans.
Where he used to manage 20 planes, he now manages 50.
And it works.
So we push it further.
Now it’s 100 planes.
Claude is humming. The dashboards look clean. The metrics are up and to the right. His director is already putting “AI-powered airspace optimisation” in a pitch deck.
And it works.
So we push it further.
Now it’s 200 planes.
At this point, no human on earth has ever managed this many simultaneous flights.
But the system can handle it. The models are good. The suggestions are fast. The automation is solid.
And it works.
Until it doesn’t.
Because something subtle changes.
Not in the system.
In the human.
At 20 planes, the controller understands everything:
- where each plane is
- where it’s going
- why decisions are made
At 50, he understands enough.
At 100, let alone 200?
He understands fragments.
Claude suggests route A. Another system suggests route B. A third agent flags a conflict that didn’t exist five seconds ago because something else updated upstream.
Individually, everything makes sense.
Together, nothing does.
The problem isn’t controlling planes anymore.
The problem is understanding what’s happening.
And just to make things spicy:
We didn’t reduce collision risk.
We scaled it.
This is how bottlenecks work
In engineering, we love bottlenecks.
Find one. Remove it. Move on.
Database too slow? Optimise it.
Deployment too manual? Automate it.
Development too slow? Add AI.
We are exceptionally good at this game.
Every time we remove a bottleneck, the system speeds up.
And something else breaks.
So we fix that.
And then the next thing.
And the next.
Turns out, we’ve accidentally engineered perfect job security.
AI is just the most effective bottleneck remover we’ve ever built.
Which is exactly why it’s dangerous.
Because eventually, you hit the one bottleneck you can’t remove.
The one thing that doesn’t scale
Let’s bring this back down from airspace collapse to something more familiar.
A company.
A web app.
An internal ETL pipeline.
Both critical. Both evolving. Both “high priority.”
You add AI.
Now features get built faster, pipelines evolve faster, and agents take over repetitive work.
And it works.
So you push it further.
You run multiple agents in parallel.
One is refactoring a core module.
Another is adding a feature.
A third is improving performance in the ETL.
Pull requests are flying in.
The velocity chart looks incredible.
And yet…
Nothing ships.
Because:
- The refactor quietly changes assumptions the feature relies on.
- The ETL optimisation introduces edge cases no one fully understands.
- Each agent is correct in isolation, but inconsistent in combination.
So everything needs review.
Not code review.
Reality review.
And there’s exactly one person who can do it.
The one who still holds the system in their head.
Coincidentally, also the one who goes on holiday
and takes the entire system with them.
This is where things slow down.
Not because we can’t build faster.
Because we can’t think faster.
The illusion of parallelism
We love parallel work.
It feels like progress.
- multiple agents
- multiple engineers
- multiple threads of execution
Everything happening at once.
Until it converges.
Because eventually, all parallel work collapses into a single point:
Someone has to understand it.
Not just the code.
The interactions.
The trade-offs.
The unintended consequences.
And that someone has a hard limit.
Not a soft one. Not “we can optimise this later.”
A hard cap.
You can:
- add more tools
- add more agents
- add more engineers
But you cannot increase the amount of complexity a human can hold at a given time.
Some people hit that limit at 2 parallel concerns.
Some at 5.
A rare few can juggle 10.
But no one scales indefinitely.
“We just need better AI”
This is the default reaction.
Things are slow?
- try a new model
- add more automation
- build better tooling
We assume the bottleneck is still somewhere in the system.
It isn’t.
AI doesn’t remove the bottleneck. It moves it, until it lands somewhere it can’t move any further.
Into the one thing that doesn’t scale: human cognition.
And then comes the fun part: AI FOMO
Because now we have a new pressure layer.
If you’re not using AI:
- you’re falling behind
- you’re slower
- you’re “not leveraging the tools”
If you are using AI:
- you’re managing more complexity
- reviewing more output
- holding more moving pieces
So either way, you lose.
The real constraint isn’t time.
It isn’t budget.
It isn’t tooling.
It’s how much you can carry without your brain quietly catching fire.
Mental capacity is a bottleneck
Not a metaphor.
A real, hard, non-negotiable constraint.
You can push it.
You can stretch it.
You can temporarily ignore it.
But you can’t remove it.
So the next time your system slows down, and your instinct is:
“We need more engineers.”
“We need more AI.”
“We need more parallelism.”
Pause.
It’s not execution.
It’s not tooling.
It’s thinking.
The system scaled. The thinking didn’t.
Follow our bi-weekly SaaS show
Fast, honest insights from the trenches of SaaS. Andreas and Sjimi, partners at madewithlove, share what they’re seeing inside real SaaS teams and products every two weeks.