Developer experience

12 posts
From opt-in to default

Developers don't skip standards because they're careless; they skip them because there are fifteen things to remember and the code was the hard part. The real question isn't which tasks your LLM handles well. It's what's still slipping through ungated.

Three Claudes walk into a codebase

The machines aren't replacing developers; they're promoting them. You're no longer just writing code; you're managing agents, reviewing output, and setting standards. Three Claudes walk into a codebase, and suddenly you're a manager.

Running multiple Claude accounts without logging out

Managing multiple Claude Code accounts across machines gets messy fast. Jean-Claude keeps the useful parts in sync, separates account-specific config, and makes switching between personal, team, and client setups far less painful.

Conductor: running multiple AI coding agents in parallel

Conductor by Melty Labs makes parallel agent workflows practical by running multiple agents with separate tasks simultaneously. The trade-offs are real but manageable, and this is where development is heading.

I'm using my engineering colleagues as my personal agents

A couple of months ago, I was copy-pasting prompts into ChatGPT. Now I'm shipping features, running tests, managing branches, and keeping documentation alive, with a team of agents doing the heavy lifting. All by myself.

Onboard the AI like you'd onboard a developer

Legacy codebases are messy, undocumented, and full of decisions nobody remembers making. But if you can explain a codebase to a new developer, you can onboard an AI, and that changes everything.

QA is the last bottleneck

Software development's feedback loop has compressed from years to minutes, but QA remains the last bottleneck, the one place still dependent on human judgment. AI is rapidly closing that gap, and before the year is out, that final human checkpoint may no longer be necessary.

Stop obsessing over the perfect prompt

LLMs are built for conversation, not incantations. The value isn't in your opening message; it's in the back-and-forth: clarifying, correcting, refining. Iteration is cheap. The conversation is the work.

Out with multitasking, in with orchestrating

The return of multitasking, but not as we knew it. Running multiple Claude Code instances simultaneously isn't the context-switching productivity killer we've been warned about for years; it's orchestration.
