When I started programming, GitHub did not exist yet. If you needed to solve a problem, you would end up on forums, blogs, or obscure mailing lists. You would copy a snippet of code, paste it into your codebase, tweak it a bit, and hope it worked. Most of the time, it did, to some extent. Sometimes it didn’t. But that was the state of things.

Then open source and package managers really took off. Suddenly, instead of copying and pasting random code into our projects, we began to depend on shared libraries. If you needed to validate an email address, you didn’t write your own implementation anymore. You searched for a well-maintained open-source package, one that had a community behind it, one that had seen real-world usage, bug reports, fixes, and edge cases you would never have thought of yourself. Over time, we deliberately moved a significant amount of logic out of our own codebases because we trusted the open-source community more than our own ad hoc solutions.

Fast forward to today, and something strange is happening again. If an engineer needs to validate an email address, they often no longer look for an existing package. They ask an LLM to implement it. The model happily writes the code, which then lives directly in the codebase. Functionally, we are back to copying and pasting snippets from forums, except now the forum is an AI chat window.

Yes, LLMs are trained on open source. That is obvious. However, what they give you is the average of all the solutions they have seen, not the best solution for your specific use case. The generated code does not carry the intent of the original authors, it has no maintainers, and it has no community continually discovering new edge cases over time. The result often looks correct, but correctness is not binary: the gaps hide in the edge cases nobody thought to test.
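To make the contrast concrete, here is a minimal sketch. The first helper is the kind of regex-based validator an LLM will typically produce on request; the second leans on a maintained package instead (I’m using the widely used email-validator package from PyPI purely as an illustration).

```python
import re

# The kind of helper an LLM will happily generate on request: it looks
# reasonable and handles the obvious happy-path addresses.
EMAIL_RE = re.compile(r"^[\w.-]+@[\w.-]+\.\w+$")

def is_valid_email_naive(address: str) -> bool:
    return EMAIL_RE.fullmatch(address) is not None

# Two of the edge cases it quietly gets wrong:
print(is_valid_email_naive("user+tag@example.com"))   # False, yet plus-addressing is valid
print(is_valid_email_naive("john..doe@example.com"))  # True, yet consecutive dots are not allowed

# The shared-library route: email-validator (pip install email-validator)
# encodes years of bug reports and RFC edge cases rather than one model's
# "average" answer, and it gets fixed upstream when someone finds a gap.
from email_validator import validate_email, EmailNotValidError

def is_valid_email(address: str) -> bool:
    try:
        validate_email(address, check_deliverability=False)
        return True
    except EmailNotValidError:
        return False

print(is_valid_email("user+tag@example.com"))   # True
print(is_valid_email("john..doe@example.com"))  # False
```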

Every time we do this, we duplicate code across more places. Every team, every codebase, every AI-assisted workflow ends up with its own slightly different version of the same logic. That means more code to maintain, a larger surface area for bugs, increased security risks, and additional technical debt. And unlike with a shared library, there is no clear upgrade path when something turns out to be wrong.

What worries me most is that we are slowly eroding the open source community this way. If fewer people depend on shared libraries, fewer people contribute back. Maintainers burn out faster. Creativity disappears, because instead of building better abstractions together, we all generate isolated implementations in private codebases. The feedback loop that made open source strong in the first place starts to break down.

Ironically, this feels like a step backwards. We once learned that trusting well-maintained open-source solutions was better than rolling our own. With AI, we are reversing that lesson. We are trusting generated code more than the collective experience of a community. That should make us uneasy.

AI is a powerful tool, but if we use it as a replacement for open source rather than as a complement to it, we risk losing something important. Not just shared code, but shared craftsmanship, shared responsibility, and shared creativity.
