"A letter to my son: growing up in the AI world" ends on a note of uncertainty: I'm curious to see what path you choose in this new world. The points touched on there echo conversations I hear every day, and they reveal the same anxiety: craft dying, margins squeezed, quality eroding, the industry racing to the bottom.
I'm not uncertain. Not because I know exactly how this plays out, but because every concern raised points to the same conclusion: engineering becomes more valuable when AI handles commodity work, not less.
The problem is we've been selling the wrong thing for 40 years.
When you hired a developer, you never actually wanted someone to translate requirements into syntax. You wanted someone who understood what you were trying to accomplish, knew how to build it sustainably, could spot problems before they became expensive, made judgment calls about tradeoffs, and understood the business context behind the technical requirements.
Syntax generation was just an inconvenient requirement to get those things. It was bundled with the valuable part because there was no other way to deliver engineering expertise except through the act of writing code.
AI unbundles them. Now you can get syntax generation separately. And suddenly it becomes obvious what the valuable part always was: the thinking, not the typing.
This is exactly what happened when calculators unbundled arithmetic from mathematics. Mathematicians didn't become less valuable because addition got faster. They became more valuable because everyone could finally see what mathematics actually was. The same thing happened when spreadsheets unbundled calculation from financial analysis. Financial analysts didn't become obsolete. The ones who understood what to calculate and why became more valuable. The ones who were just fast at arithmetic found other work.
We're at that same inflexion point.
Consider what happens when non-technical founders vibecode MVPs over a weekend. Most people assume that if everyone can generate code, experienced engineers matter less. The reality: the gap between "working" and "production-ready" becomes obvious faster. Those weekend MVPs will break in production when real load hits them, or waste months building the wrong architecture. Both failure modes call for someone who knows how to build systems that survive contact with real users, real scale, and real business requirements.
That was always our value. It's just more visible now because it's no longer bundled with syntax generation.
The brownfield positioning is right for the wrong reason. More companies will hit technical debt crises as they vibecode faster. True. But that's not why brownfield becomes more valuable. Brownfield becomes more valuable because it's where AI fails.
AI can refactor 1000 routes. It can add test coverage. It can fix isolated bugs. It can generate documentation. It can even suggest architectural improvements. What it cannot do: understand the implicit business rules embedded in five years of patches and hotfixes, spot the architectural patterns that emerged organically and shouldn't be disrupted, navigate the political landscape of what can and can't be changed, distinguish technical debt from load-bearing complexity, or know which "bad" code is handling critical edge cases.
That requires experienced human expertise. Specifically, an engineer who has seen what happens when you "clean up" code that turns out to be preventing a critical failure.
This is where experience compounds. Where clients can't just hire cheaper developers with better AI tools. Where expertise matters more every year, not less. That includes legacy codebases, but also security-critical systems where "works on my machine" isn't good enough, highly regulated industries where compliance isn't optional, domain-specific architectures where context matters more than patterns, and systems at scale where performance characteristics trump code elegance.
In a world of cheaply generated AI code, quality becomes the differentiator. Who ensures AI-generated code is production-ready? Who spots the security vulnerabilities, the performance bottlenecks, the architectural mistakes that AI makes because it's trained on average code, not excellent code?
The agencies that compete on cheap code output can't. They're optimising for volume and speed, not sustainability and quality. We can. We're already doing it.
That wraps up part one. Part two will follow shortly.