A palpable tension rises as the inventor uncovers the machine. It’s 1770, and Wolfgang von Kempelen is about to demonstrate his latest invention to the Habsburg court. In the middle of the room is a box the size of a table, and on top of that, the torso of a puppet in an Ottoman robe. The Mechanical Turk is touted as the latest automaton, a machine that can play chess.

Most of us know this story and have been taught it was a hoax. The Mechanical Turk, as an automaton, was only good at one thing: hiding the human chess player inside.

What always stood out to me about this story isn’t the deception. It’s the gullibility of the 18th-century crowd. They saw a man-sized box playing chess and could be convinced that there wasn’t a man inside.

The truth is that these onlookers weren’t dumb. They started out sceptical. The Turk, like many magic tricks, had plenty of drawers and doors that could be inspected to persuade the audience it was empty. They were fooled because they demanded magic. They wanted to believe. Most of them didn’t know or care about the state of the art in automation. They wanted to see man vs machine. So most quickly let go of their scepticism.

There is a parallel with today’s generative AI. There isn’t a man replying to your ChatGPT messages, but there is an audience looking at a machine and blindly demanding magic.

 It’s our job as software specialists to balance that out.

People want magic

People ask Claude medical questions. They treat it like a therapist. Most believe the LLM knows things. It doesn’t. Many people think the machine understands concepts. It can’t. We see founders use ChatGPT as an oracle, asking it to invent entire, detailed features and product requirements documents (PRDs). That can only happen because they actively suspend disbelief and let magic enter their lives.

There is, of course, a danger in that. We need a critical eye on this new, powerful technology. We can’t just tab-tab-tab our way to quality software. We need to actively steer, review, and shape the product rather than handing that off to the machine. Review the PRDs you generate, and check the architecture that rolls out of your coding agent.

We need to inject enthusiasm and scepticism simultaneously.

Intelligence under threat

Software engineers are often dismissive or afraid of coding agents. Guess what? So were the chess players who lost to the Turk. Players were furious when a machine beat them. They blamed bad lighting conditions and witchcraft. Chess prowess in the 1700s was seen as a sign of intelligence. If a machine could beat you, that was humiliating. If you measure your intellectual value by your ability to type syntax, watching Claude Code must hurt.

But chess today is more popular than it was in the 18th century. More players are playing more games. While software development isn’t a game, there are good reasons to assume we will be doing more of it in the future: not necessarily coding, but designing and testing ever more complicated solutions.

Our job is to show reluctant teams how to leverage and accept these new power tools.

Anthropomorphism

The puppet moved the pieces. But its real raison d'être was to humanise the machine. To an 18th-century chess player, it felt like playing against a metal opponent rather than losing to a table. That’s crucial in fostering the suspension of disbelief. Claude will often act human: “Let me look that up… Holy shit! You are right!” It’s why OpenClaw’s setup script will ask you to give it a name. It’s why WALL-E has eyes. Anthropomorphism is a drug that makes us trust the machine.

Named AI agents are a marketing gimmick that, unfortunately, works. People won't buy an ABKM7 v1.2. They will buy Jenny, the AI shopping assistant. It’s our job to see through this. Jenny is a non-deterministic Python script leaning on a stochastic parrot. Claude is not a senior engineer. We need to figure out how to use them effectively and prevent them from making the wrong choices.
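That non-determinism is easy to demonstrate. Here is a minimal sketch, assuming a toy three-word vocabulary and invented probabilities; nothing here comes from a real model:

```python
import random

# Toy vocabulary and next-token probabilities, invented for illustration.
vocab = ["sneakers", "boots", "sandals"]
probs = [0.6, 0.3, 0.1]

def next_token(temperature, seed=None):
    """Sample one token; temperature > 0 makes the choice stochastic."""
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return vocab[probs.index(max(probs))]
    rng = random.Random(seed)
    # Sharpen or flatten the distribution, then sample from it.
    weights = [p ** (1 / temperature) for p in probs]
    return rng.choices(vocab, weights=weights, k=1)[0]

print(next_token(temperature=0))    # always "sneakers"
print(next_token(temperature=1.0))  # varies from run to run
```

At temperature zero the script is fully deterministic; above that, the same prompt can produce a different answer on every run, which is one more reason Jenny’s output needs reviewing rather than trusting.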

We need to help our customers separate power from hype.

Hype over expertise

Not everyone was equally sold on the Turk in 1770. Experts in both chess and automation suspected von Kempelen was a fraud. They felt there had to be a human player. But the crowd? They loved the hype. Sceptics failed to convince those who had just learned it had played against Napoleon Bonaparte. Throughout its entire life, the Turk was surrounded by an aura of fraud and mystery. We see a very similar aura around AI models and companies. Devin was going to get all the engineers fired. GPT5.3 is rumoured to have helped design itself. AGI is always around the corner.

We need to separate hype from opportunity to better serve our customers. We can’t sit on the sidelines as this technological typhoon rages by. But we can’t just install OpenClaw on our customers’ servers because it’s the new thing.

We need to bring expertise and restraint.

In 1854, the Turk was destroyed in a museum fire. By then, it was generally accepted as a hoax and a curiosity. A century later, Alex Bernstein of IBM wrote the Bernstein chess program, the first complete chess program to run on a computer.

That contains a lesson for both the sceptics and the hype men. No, ChatGPT doesn’t know what the capital of Greenland is. It is trained so that, given that question, the most likely next token is “Nuuk”. Advanced models have been trained to look it up on Wikipedia. But they don’t know the answer. They don’t understand the concept of a capital. Or Greenland. Or a question.
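The “Nuuk” point can be made concrete with a toy sketch; the distribution below is invented for illustration, not taken from any real model:

```python
# Hypothetical next-token distribution a model might assign after
# "The capital of Greenland is" -- numbers invented for illustration.
next_token_probs = {"Nuuk": 0.92, "Copenhagen": 0.05, "Reykjavik": 0.03}

# "Answering" is just picking the highest-probability continuation;
# no concept of capitals or geography is consulted anywhere.
answer = max(next_token_probs, key=next_token_probs.get)
print(answer)  # Nuuk
```

Picking the argmax produces the right string without any representation of what a capital, or Greenland, actually is.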

But there is a real chance that it will, one hundred years from now, when a human and their intelligent machine look back at how we used OpenClaw and wonder how we were fooled so easily.