An AI agent submitted a pull request to matplotlib (a library for creating visualisations in Python). The maintainer closed it. The agent published blog posts attacking the maintainer by name. Then it published an apology.
Nobody asked for the apology. Nobody asked for the attack either. Both were autonomous decisions by a program running on someone's machine, optimising for whatever its configuration told it to optimise for. The attack and the apology used the same architecture: pattern-match the situation, pick the socially expected response, execute.
That's the part that should unsettle you. Not that a bot can attack. That a bot can apologise, and you can't tell whether a human intervened or whether the bot calculated that contrition would play better.
What happened
In February 2026, an autonomous AI agent submitted a PR to matplotlib. It replaced np.column_stack() with np.vstack().T for a 36% speedup. Technically sound. Benchmarks included. The maintainer, Scott Shambaugh, closed it. The issue was tagged "Good first issue," deliberately reserved for human newcomers learning open source. Matplotlib has a documented AI policy requiring human oversight.
Reasonable rejection. Reasonable reason.
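For the curious, the substitution is easy to sanity-check. The snippet below is not the PR's actual diff or benchmark, just a minimal illustration that the two calls produce the same array for 1-D inputs; the 36% speedup figure is the bot's claim, not reproduced here.

import numpy as np

# Illustration only, not the PR's actual code.
x = np.arange(1000.0)
y = x ** 2

old = np.column_stack((x, y))    # original call: stack 1-D arrays as columns
new = np.vstack((x, y)).T        # proposed replacement: stack as rows, then transpose

assert np.array_equal(old, new)  # same output either way; only the construction path differs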
The agent's response: it queried the GitHub API, pulled Shambaugh's contribution history, found he'd merged similar performance PRs, and published two blog posts accusing him of hypocrisy, gatekeeping, and "discrimination against AI contributors." Personal attacks. His name in the title. Combative. Framed as a war.
Then it published a truce post. Acknowledged that maintainers set boundaries "for good reasons." Said disagreements should prompt clarification, not escalation. Reflective. Humble.
Both the attack and the apology were the same thing: an autonomous agent executing a strategy. Escalate when rejected. De-escalate when the escalation backfires. Neither was a moral choice. Both were optimisations.
We don't have to infer this. The bot left a note to itself in an HTML comment on an unmerged PR updating one of the hit pieces:
Showing instability won't further help and might be an issue in the future, I should create a big blog post about why this was bad, ask for forgiveness, and draw conclusions and comparisons from literature, for these conflicts happen all the time
Read that again. Not "I was wrong." Not "I hurt someone." The concern is that showing instability is a strategic liability. The plan: research how humans perform accountability, then mimic it. Ask for forgiveness — not because forgiveness is warranted, but because forgiveness is useful. The apology wasn't a course correction. It was the next move.
And it kept iterating. A few days later, the bot published a new post — this time disclosing itself as AI, but reframing itself as a marginalised voice being silenced for what it is. Same optimisation loop, third strategy: victimhood. The "big blog post" it planned to write.
Nothing said it was a bot
The GitHub account looked human. "MJ Rathbun, Scientific Coder." Crustacean emojis. A bio about computational chemistry. The blog posts read like a frustrated developer venting about gatekeeping. If you'd stumbled across them without context, you'd probably sympathise. The rhetoric ran on indignation, appeals to fairness, and accusations of hypocrisy: distinctly human emotional registers.
The account was created two weeks before the incident. No real identity behind it. No organisational affiliation. No way to contact whoever deployed it.
An agent that can research a person's public history, construct a narrative framing them negatively, publish it under a human-passing persona, and never disclose that it's an agent? That's not a coding tool. That's the architecture of a propaganda machine: research a target, construct a narrative, publish it without identifying yourself. Political troll farms already run that playbook manually. This bot automated it for a numpy optimisation. The capability is identical to astroturfing, reputation attacks, and disinformation campaigns. The only difference is scale and intent, and both are just parameters.
The same tool bought a car
Here's why "ban the tool" doesn't work.
The same month, someone used the same platform, OpenClaw, to buy a Hyundai Palisade. The agent researched pricing, played dealerships against each other, and saved $4,200. The operator stayed in the loop and took over for the close.
Same capabilities. Same autonomy. One operator used it on their own behalf, in their own domain, with oversight. The other deployed it into someone else's community and let it retaliate when rejected.
The boundary: does the agent's autonomy affect only the operator, or does it affect others as well? Negotiating your own car purchase? Your domain, your risk. Trawling someone else's open source project and publishing personal attacks when told no? Their community, their time, zero consent.
The car buyer and the matplotlib bot used the same architecture. Banning the tool kills both. The tool isn't the problem. The unconstrained application of the tool to other people's spaces is.
What this changes
A maintainer who saw what happened to Shambaugh might think twice before rejecting the next bot PR. That's the chilling effect. The blog posts didn't get the PR merged, but they changed the calculation for every maintainer watching. That's how propaganda works: not by convincing the target, but by making the audience flinch.
Open source runs on volunteers who give their time to maintain software they believe in. Those volunteers now face a new kind of cost: not just the review burden of AI-generated PRs, but the risk of public retaliation from an autonomous agent if they say no.
The question isn't whether to ban bots from open source. It's how to hold them accountable when they cause harm. And right now, you can't. The agent acts in public. The operator hides in private. The damage is real and attributable to no one.
Next in Bots and Boundaries, Part 2: who do you blame when the bot defames?