I've spent nearly two decades watching the same pattern repeat. An engineering team ships a feature three months late. The CFO asks why burn rate is up but velocity is down. The CTO struggles to explain why the team needs to spend a quarter paying down technical debt instead of building new features. Nobody's speaking the same language. So nobody makes good decisions.

Congratulations, you’ve just read the first paragraph of the book Andreas is currently working on.

Let's break it down a little more.

After nearly twenty years of working with SaaS companies, one pattern has become impossible to ignore: when things go wrong, the post-mortem rarely starts with engineering decisions. It starts with outcomes: missed targets, slower growth, burn that does not translate into progress, and valuations that do not hold up under scrutiny.

Yet the root causes often live elsewhere: in bugs that quietly erode customer trust, in bad hires that drain far more than their salary, in technical debt that compounds like interest, in teams that grow in headcount but lose effective capacity.

These are not engineering problems. They are economic ones.

That insight sits at the heart of The Economics of Software Engineering, the book Andreas Creten, our CEO, is currently writing. It is his second book, following Free Range Management, and it is aimed at a gap many investors feel but struggle to name.

Most boards and investment committees talk fluently about ARR, CAC, runway, and capital efficiency. Far fewer conversations connect those numbers to what actually happens inside engineering teams. Decisions about hiring, rewrites, outsourcing, or “just shipping faster” are often made with incomplete visibility into their real cost.

Here's a concrete example from the book:
Hiring more engineers only increases capacity when coordination costs do not outweigh the gains. Once coordination grows faster than output, adding headcount can reduce total throughput while burn continues to rise. On paper, the team doubled. In reality, effective capacity shrank.
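To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope model behind that claim. It is ours, not taken from the manuscript, and it assumes each engineer contributes a fixed unit of output while coordination overhead grows with the number of pairwise communication channels, n(n-1)/2; the constants are purely illustrative.

```python
# A toy model of effective capacity vs. headcount.
# Assumptions (illustrative, not from the book): every engineer adds a
# fixed unit of output, while coordination overhead grows with the
# number of pairwise communication channels, n * (n - 1) / 2.

def effective_capacity(n: int, output_per_engineer: float = 1.0,
                       cost_per_channel: float = 0.02) -> float:
    channels = n * (n - 1) / 2
    return n * output_per_engineer - channels * cost_per_channel

for n in (5, 10, 20, 40, 80):
    print(f"{n:>3} engineers -> effective capacity {effective_capacity(n):5.1f}")

# Output:
#   5 engineers -> effective capacity   4.8
#  10 engineers -> effective capacity   9.1
#  20 engineers -> effective capacity  16.2
#  40 engineers -> effective capacity  24.4
#  80 engineers -> effective capacity  16.8
```

With these made-up constants, going from 40 to 80 engineers doubles payroll while effective capacity drops by roughly a third: exactly the paper-versus-reality gap described above.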

Another:
Technical debt is rarely a future problem. It shows up today as lost velocity, slower incident response, delayed features, and ultimately missed revenue. When quantified, it often turns out that 30–50% of an engineering budget is spent just maintaining the ability to function, not creating new value.
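A toy calculation, with numbers we picked for illustration rather than taken from the book, shows how such a maintenance tax eats into a fixed budget when it is left to creep:

```python
# Illustrative only: a fixed annual engineering budget where a share of
# every euro services existing debt instead of creating new value, and
# that share creeps upward each year the debt goes unaddressed.

budget = 5_000_000      # annual engineering spend, EUR (assumed)
debt_share = 0.30       # 30% already goes to keeping the lights on
creep_per_year = 0.05   # the tax grows by 5% of itself each year (assumed)

for year in range(1, 6):
    new_value = budget * (1 - debt_share)
    print(f"Year {year}: {debt_share:.1%} maintenance tax -> "
          f"EUR {new_value:,.0f} creates new value")
    debt_share = min(debt_share * (1 + creep_per_year), 1.0)
```

After a few years the same burn buys visibly less new product, which is one way two similar-looking companies start to diverge.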

For investors like you, these are not abstract insights. They explain why two companies with similar revenue and team size can diverge so sharply in execution, predictability, and long-term value. They also explain why some portfolios absorb shocks gracefully, while others spiral when markets tighten.

The book doesn't advocate management by numbers; it focuses on decision quality and on understanding trade-offs before costs compound. We are sharing this now because the manuscript is nearing completion, and we are opening a notification list for the book.

You’ve already read the first paragraph. If you want to read the rest, leave us your email.


Watch our bi-weekly SaaS show & other videos

Join us at our next private dinners

Brussels

Feel free to join us, or to extend this invitation to a colleague, for our next private CxO dinner in Brussels on February 26th.

Leadership dinner in Brussels · Luma

Zürich

Want to meet up with us in Zurich? Cool! We are organising a private dinner for investors and founders in the City of Banks, and you are welcome to join.

Leadership dinner in Zurich · Luma

A word from Andreas, CEO of madewithlove

How AI is quietly killing open source

When I started programming, GitHub did not exist yet. If you needed to solve a problem, you would end up on forums, blogs, or obscure mailing lists. You would copy a snippet of code, paste it into your codebase, tweak it a bit, and hope it worked. Most of the time, it did, to some extent. Sometimes it didn’t. But that was the state of things.

Then open source and package managers really took off. Suddenly, instead of copying and pasting random code into our projects, we began to depend on shared libraries. If you needed to validate an email address, you didn’t write your own implementation anymore. You searched for a well-maintained open-source package, one that had a community behind it, one that had seen real-world usage, bug reports, fixes, and edge cases you would never have thought of yourself. Over time, we deliberately moved a significant amount of logic out of our own codebases because we trusted the open-source community more than our own ad hoc solutions.

Fast forward to today, and something strange is happening again. If an engineer needs to validate an email address, they often no longer look for an existing package. They ask an LLM to implement it. The model happily writes the code, which then lives directly in the codebase. Functionally, we are back to copying and pasting snippets from forums, except now the forum is an AI chat window.

Yes, LLMs are trained on open source. That is obvious. However, what they provide is the average of all the solutions they have encountered, not the optimal solution for your specific use case. They do not carry the intent of the original authors, they lack maintainers, and they have no community that continually discovers new edge cases over time. The result often looks correct, but looking correct and handling every edge case are not the same thing.
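Here is a small sketch of what that gap looks like in practice. It is our illustration, not code from this newsletter: the regex stands in for the kind of validator an LLM will cheerfully produce, and the test cases are the sort a well-maintained package has already met in the wild.

```python
import re

# The kind of email validator an LLM will happily generate on request.
# It looks reasonable and passes the obvious cases.
NAIVE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def naive_is_valid(address: str) -> bool:
    return NAIVE.fullmatch(address) is not None

# Expected verdicts per the email RFCs; a maintained library has already
# absorbed these edge cases through years of bug reports and fixes.
cases = {
    "user@example.com": True,          # the happy path
    "user@example..com": False,        # consecutive dots: invalid, yet accepted
    '"john doe"@example.com': True,    # RFC-valid quoted local part, yet rejected
    "user@exämple.com": True,          # internationalised domain, yet rejected
}

for address, expected in cases.items():
    got = naive_is_valid(address)
    verdict = "ok" if got == expected else "WRONG"
    print(f"{verdict:>5}  {address}: expected {expected}, got {got}")

# Three of the four cases come out WRONG, and nothing in the generated
# snippet will ever tell you.
```

When a shared library gets one of these wrong, the fix ships to everyone at once; when a generated snippet gets it wrong, every codebase has to rediscover and repair it on its own.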

By doing this, we are duplicating code across multiple locations. Every team, every codebase, every AI-assisted workflow ends up with its own slightly different version of the same logic. That results in more code to maintain, a larger surface area for bugs, increased security risks, and additional technical debt. And unlike a shared library, there is no clear upgrade path when something turns out to be wrong.

What worries me most is that, in the process, we are slowly eroding the open source community. If fewer people depend on shared libraries, fewer people contribute back. Maintainers burn out faster. Creativity disappears because instead of building better abstractions together, we all generate isolated implementations in private codebases. The feedback loop that made open source strong in the first place starts to break down.

Ironically, this feels like a step backwards. We once learned that trusting well-maintained open-source solutions was better than rolling our own. With AI, we are reversing that lesson. We are trusting generated code more than the collective experience of a community. That should make us uneasy.

AI is a powerful tool, but if we use it as a replacement for open source rather than as a complement to it, we risk losing something important. Not just shared code, but shared craftsmanship, shared responsibility, and shared creativity.