This isn’t an article about AI. Or it is.
Anyway, setting the AI-or-not-AI debate aside, this is about what customer support has become in many digital businesses: a well-instrumented workflow that optimises for deflection, not resolution. AI just made the pattern more visible.
The now-standard flow looks like this: you enter via a bot, you provide a careful description, you get a generic (sometimes outdated) response, then you ask for a human and re-explain everything. In my recent case with LinkedIn, the AI agent referenced information “as of Jan 1st, 2026,” even though I contacted them on March 3rd, 2026. That small detail signals a bigger issue: the first-line system isn’t connected to up-to-date policy or configuration, or it’s running on stale retrieval, and no one is validating its output.
Then comes the human handoff, but it’s rarely a handoff. It’s a reset. The human agent often can’t see what the bot saw, can’t use what you already wrote, and can’t safely take action even when the issue is clear. Sometimes you get transferred again and end up doing the same loop a third time. Eventually, you receive a polite version of: “We understand, but there’s nothing more we can do beyond what the app already allows.”
From a technology perspective, this is less about “bad agents” and more about architecture and operating model:
- Context isn’t preserved across tiers (weak case state, poor summarisation, no shared timeline of evidence)
- The support surface isn’t connected to real control planes (limited permissions; no scoped admin actions; no auditable break-glass path)
- Escalation paths exist on paper, but not in tooling (no reliable route to engineering/operations, or it’s too expensive to use)
- AI is deployed as a gatekeeper (reduce contact rate), not as an accelerator (reduce time-to-resolution)
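The first failure on that list, lost context across tiers, is the most fixable. A minimal sketch of the alternative: a shared case record that every tier (bot, human, engineering) appends to, so a handoff carries the evidence trail forward instead of resetting it. All names here are illustrative, not any vendor’s real API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CaseEvent:
    actor: str   # e.g. "customer", "bot", "agent:jane"
    kind: str    # "customer_message", "bot_answer", "evidence", "handoff"
    body: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Case:
    case_id: str
    timeline: list[CaseEvent] = field(default_factory=list)

    def add(self, actor: str, kind: str, body: str) -> None:
        self.timeline.append(CaseEvent(actor, kind, body))

    def handoff_summary(self) -> str:
        """What the next tier sees: the full timeline, not a blank slate."""
        return "\n".join(
            f"[{e.at:%Y-%m-%d %H:%M}] {e.actor} ({e.kind}): {e.body}"
            for e in self.timeline
        )

# Illustrative flow: the human agent inherits everything the bot saw.
case = Case("CASE-1234")
case.add("customer", "customer_message", "Billing shows the wrong plan")
case.add("bot", "bot_answer", "Generic answer based on stale policy")
case.add("customer", "evidence", "Screenshot uploaded, 2026-03-03")
case.add("bot", "handoff", "Escalated to human with summary attached")
print(case.handoff_summary())
```

The point is not the data structure itself but the operating rule it encodes: no tier may respond without reading, and appending to, the shared timeline.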
I saw the same shape of experience with Revolut as well: different domain, similar mechanics. The customer (me) ends up doing the integration work, repeating the narrative, providing the timestamps, re-uploading the proof, while the system mostly routes and responds.
Whatever the technology, it’s worth stepping back and asking what we’re optimising for. When the main metric is deflection (fewer tickets, shorter chats), the experience degrades even if the tooling looks modern.
An outcome-first posture is simpler: reduce repetition, improve time-to-resolution, and make escalation real. Put AI where it removes friction (intake, summarisation across handoffs, answers grounded in current policy/state), and give customer support admins power again: scoped permissions, safe runbooks, and auditable “do something” controls, so a human can actually fix what the system can’t.