Last week, we mass-deleted 16,000 lines of code and mass-added 11,500 lines to replace them. In the process, we migrated our htaccess tester from SlimPHP to Laravel, ripped out React in favour of Livewire, and swapped custom SASS for Tailwind CSS. The whole thing took less than 6 working hours.

The Setup

The htaccess tester is a side project we've maintained for years. It lets you test Apache rewrite rules without deploying them. The stack had grown stale: SlimPHP 4, React with a custom Webpack setup, hand-rolled SASS. Nothing was wrong with it: the code worked fine and users were happy. But maintaining two separate codebases (a PHP API and a React SPA) for what's essentially a single-page tool felt like driving two cars to the same destination.

We wanted to modernise to Laravel, Livewire, and Tailwind: a unified PHP stack where everything lives together, much closer to what we maintain in our other internal projects. Traditionally, this would be a "someday" project, the kind that sits in your backlog for months, because who has two weeks to rewrite a working application?

Turns out, with AI, you don't need two weeks.

Step 1: The Backend Migration

We started with a simple prompt asking Claude to migrate from Slim PHP to Laravel. The first commit landed at 10:38 AM: 6,000+ lines changed, Laravel scaffolding in place, controllers converted, routes migrated.

Was it perfect? Absolutely not. We found ourselves committing something titled "Fix AI f*ckup" - a one-line change to a test configuration that Claude had botched. That commit title isn't frustration; it's documentation. (Okay, maybe a little frustration.) When you're moving this fast, you need to know which changes came from AI and which were human corrections.

We settled into a rhythm: AI generates, we review, we fix the subtle stuff. Server variable filtering that needed adjustment. A compatibility shim because Laravel's request classes work slightly differently than Slim's. Error handling that needed to match our existing patterns.

By lunchtime, the backend migration was done. A few hours of work, not the days we would have spent doing it manually.

The key insight: existing tests became our safety net. Every time Claude made a change, PHPUnit and our Cypress E2E tests told us if something broke. When tests failed, we'd let Claude read the GitHub Actions output and ask it to fix its own mess. Usually, it could.
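That loop is worth making concrete. A sketch of what ran after every AI commit, assuming the stock PHPUnit and Cypress binaries plus the GitHub CLI for pulling failure logs (tool names only, not our exact scripts):

```shell
# run the safety net after each AI-authored commit
vendor/bin/phpunit            # backend unit and integration tests
npx cypress run               # E2E suite that clicks through the real UI
# when a remote CI run fails, pull only the failing job's log
# and paste it into the next prompt for Claude to fix
gh run view --log-failed
```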

Step 2: Goodbye React

With the backend stable, we turned to the frontend. "Replace React with Livewire components," we asked. Out went 10,000 lines of JavaScript, node_modules, Webpack config, and TypeScript definitions. In came two Livewire components and some Blade templates. The node_modules folder alone was probably celebrating its deletion.

Here's where we learned something important: AI models are trained on historical data, and they default to "safe" older versions.

When we asked for Livewire, Claude initially reached for Livewire 3 patterns. We had to explicitly push for Livewire 4. Same story later with Tailwind CSS 4 and its new Vite plugin approach. The AI knows the old way really well because that's what's documented in thousands of tutorials and Stack Overflow answers. For the cutting-edge stuff, you need to steer.

Our final stack landed on PHP 8.5, Laravel 12, Livewire 4, and Tailwind 4. Bleeding edge across the board, but only because we insisted.
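Steering, in practice, mostly meant pinning versions instead of letting the model choose. Roughly (a sketch, not our exact commands):

```shell
# explicit major versions: an unqualified "add Livewire" gets you
# whatever version dominates the training data
composer require livewire/livewire:^4.0
npm install tailwindcss@^4 @tailwindcss/vite  # Tailwind 4's first-party Vite plugin
```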

The afternoon also brought a cascade of CI fixes. GitHub Actions didn't care that the code worked locally. Cypress config needed adjustment; the APP_URL was wrong for the test environment. At one point, Laravel refused to boot because Claude forgot to generate an application key, a classic Laravel gotcha that any developer who's touched the framework knows by heart. Claude had mass-read Laravel documentation, but apparently skimmed past the "getting started" section. Each failure became a prompt: "CI is failing with this error, fix it." Claude would propose a solution, we'd commit it, and we'd see if the next error surfaced. In no time, everything was green.
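The app-key gotcha, for the record, is a two-line fix in any CI pipeline (standard Laravel setup, shown against the framework's default `.env.example`):

```shell
# Laravel refuses to boot without APP_KEY; a bare CI checkout has no .env at all
cp .env.example .env         # seed the environment file
php artisan key:generate     # writes a fresh APP_KEY into .env
```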

Step 3: The Final Polish

The last piece was styling. Our custom SASS file had grown to over 1,000 lines of carefully crafted CSS. Converting that to Tailwind utility classes is exactly the kind of mechanical, tedious work that AI excels at.

One prompt, one massive commit: every SCSS rule translated to Tailwind classes in Blade templates. Component classes extracted for buttons, form inputs, and result states. The SASS file was deleted entirely. What would have been a full day of tedious find-and-replace work was done in under an hour.

Then came an interesting human intervention. Our Cypress tests had been selecting elements by CSS class names, classes that no longer existed in Tailwind's utility-class world. We could have asked Claude to update the selectors, but this was actually a code smell. The right fix was adding `data-cy` attributes for test selectors, a best practice we should have followed all along.
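The shape of that fix, sketched: the markup gains a `data-cy` attribute, and the specs stop caring about styling classes (`byCy` is a hypothetical helper, not from our codebase):

```javascript
// Before: coupled to styling, broken the moment SASS classes became Tailwind utilities
//   cy.get('.result-box.success')
// After: the Blade template gains <div data-cy="result-success" ...>,
// and the spec targets that stable hook instead:
//   cy.get('[data-cy="result-success"]')

// a tiny helper keeps specs consistent (hypothetical name)
const byCy = (name) => `[data-cy="${name}"]`;

console.log(byCy('result-success')); // → [data-cy="result-success"]
```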

Sometimes the AI migration reveals technical debt you'd been ignoring.

The Deployment Surprise

Tests passing locally and in CI are one thing. Production, as always, had other plans.

When we deployed, the site returned a blank page. The problem? Our React setup had served files from `/build`, but Laravel expects `/public`. The nginx config was still pointing to the old path. A quick revert to Forge's default configuration fixed that.
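The change itself was a single directive in the site config; roughly (paths illustrative, following Forge's default Laravel template):

```nginx
server {
    # was: root /home/forge/example.com/build;  (the old React bundle)
    root /home/forge/example.com/public;        # Laravel's front controller lives here
    index index.php;
}
```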

Then came the 500 errors. Our deployment script ran `npm run build` before `composer install`, which worked fine when React was a standalone SPA, but now Vite needed Laravel's PHP files to exist first. Claude suggested reordering the deployment script: composer first, npm second. Deploy again. Site up.
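The reordered script ended up looking roughly like this (paths and flags illustrative; a sketch of a Forge-style deploy, not our exact file):

```shell
set -e
cd /home/forge/example.com
git pull origin main
# PHP first: Laravel's Vite integration needs the framework files and config in place
composer install --no-interaction --prefer-dist --optimize-autoloader
# only now can the asset build succeed
npm ci && npm run build
php artisan migrate --force
php artisan config:cache
```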

These are the moments where you need to guide AI a bit more. It doesn't know your server setup, your deployment pipeline, or your nginx config. But describe the symptoms, paste the error, let it ask clarifying questions, and you'll debug it together faster than either of you would alone. Claude suggested the deployment script reorder within seconds of seeing the 500 error. We just had to be its eyes and hands on the server.

The Technique That Made It Work

Looking back, a few things made this possible:

Atomic commits with clear attribution. Every AI-generated commit got an `[AI]` prefix. When something broke, we could instantly see whether to blame Claude or ourselves. More importantly, small commits meant easy rollbacks.
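A side benefit of the prefix: attribution becomes greppable. A small demonstration in a throwaway repo (commit messages invented for the example):

```shell
# scratch repo with one AI-attributed commit and one human commit
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "[AI] Migrate controllers from Slim to Laravel"
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "Fix test configuration by hand"
# list only the AI-generated commits
git log --oneline --grep='^\[AI\]'
```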

Tests as a feedback loop. We didn't trust Claude's code; we trusted the test suite. As long as the tests passed (including end-to-end tests that actually clicked through the UI), we could be confident the migration wasn't breaking user-facing functionality.

Human judgment for architecture. Claude can translate code between frameworks, but it doesn't understand why you made certain decisions. We had to manually add comments explaining our non-default Laravel patterns. We had to decide which dependencies to keep and which to drop. The AI moved fast; we provided direction.

Explicit version requirements. "Use Livewire 4" gets you Livewire 4. Just "add Livewire" gets you whatever version has the most training data - probably not what you want for a fresh project in 2026.

The Numbers

- Timeline: less than 5 hours of logged work, spread over a few days
- Commits: 47
- Files changed: 127
- Lines added: 11,513
- Lines removed: 16,204
- "Fix AI f*ckup" commits: 1 (that we labelled honestly)

What This Means

We're not suggesting everyone should rewrite their stack this week. Our htaccess tester is a small, well-tested application. The migration worked because we had comprehensive test coverage, because the target architecture was straightforward, and because we could review every change Claude made.

But the economics of technical modernisation just changed. That "someday" refactoring project? The legacy code migration you've been dreading? The framework upgrade that never makes it to the sprint? These are now weekend projects instead of quarter-long initiatives.

The catch is that you need to actually understand what the AI is doing. When Claude generated Laravel controllers, we could review them because we know Laravel. When it messed up, we could fix it because we understood both the old code and the new. AI amplifies your abilities; it doesn't replace them.

Some hours, one developer, a complete stack migration. Not because AI wrote perfect code, but because it wrote reviewable code, fast enough that we could iterate through mistakes at superhuman speed.

The future of development isn't AI replacing programmers. It's programmers who know how to leverage AI, outpacing those who don't. 

Now, if you'll excuse us, we have a backlog of "someday" projects that suddenly look very achievable.