We have audited over 180 software companies in the past 10 years. Seed rounds, Series A, mergers, internal health checks, garage operations, you name it. Different industries, different countries, different tech stacks, but often with the same README last updated in... 2023.

After a while, the audits start to blur. Not because the companies are the same, but because the shortcuts are. The rusty deployment process, the 2015 Jira backlog, and the CTO who reviews every pull request and manages infrastructure while answering Slack messages that start with "quick question". These turn up with the regularity of that guy playing Wonderwall at a bonfire. You know it's coming, yet you can't stop it. And somewhere around the bonfire, someone thinks it's brilliant. We could start a mental game of audit bingo in the morning. Most of the card would be filled by lunchtime.

Two things to say up front. The bingo card is a map, not a scorecard. Every square on it has a rational origin story. A team chose speed over rigour because speed was the right call at the time. The loan made sense when they took it out. It's only years later, when someone opens the books, that the interest becomes visible. And nobody is exempt. We have made most of these calls ourselves on our own projects. The point of writing them down isn't to mock anyone. It's to make the patterns easier to recognise before they compound further.

These findings are also about to shift. AI is rewriting the rules on half of what we flag, making some shortcuts inexcusable and others worse in ways nobody is talking about yet. We'll get to that. But first, the current state of the average SaaS company, as seen from the inside.

Ready to play audit bingo?

1. The documentation

You have documentation. It was written during a motivated sprint in 2022, references three tools you no longer use, and the architecture diagram reflects a system that was decommissioned before anyone on the current team was hired. Knowledge lives in one person's head. That person is senior, irreplaceable, and currently on a flight to Lisbon with their phone on airplane mode. The entire engineering department is, functionally, waiting for someone to clear customs.

90% of the companies we audit have this exact setup. It isn't a vice. It's the outcome of a perfectly reasonable series of decisions: ship the feature, defer the doc update, onboard the next person face-to-face because that's faster right now. Each call is sensible in isolation. The compound interest is tribal knowledge.

This is also the finding most likely to age badly. AI already generates docs from code, maintains READMEs, and writes ADRs. In a year, "we didn't have time to document" will sound the way "we didn't have time to spell-check" sounds now. The excuse is dying. The question is whether it takes the problem with it.
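
Before pointing an AI at it, you can measure the lag yourself. A minimal sketch, assuming a git repo with docs under docs/ and code under src/ (both paths are placeholders):

```python
"""Crude staleness check: compare the last commit that touched the docs
with the last commit that touched the code."""
import subprocess

def last_commit_epoch(path: str) -> int:
    # %ct = committer timestamp of the newest commit touching `path`
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct", "--", path],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip() or 0)

docs = last_commit_epoch("docs/")
code = last_commit_epoch("src/")

if docs < code:
    lag_days = (code - docs) // 86400
    print(f"docs/ last changed {lag_days} days before src/ -- regenerate them")
```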

2. The CTO

The CTO built the product, reviews all the code, manages the infrastructure, decides the roadmap, handles production incidents, mentors the juniors, talks to investors, and occasionally sleeps. Everyone calls this a strength. "Our CTO is incredible, they built the whole thing." They really did. And that's the shortcut: one person who knows enough to hold the whole thing together is cheaper and faster than the proper org you'd need to match their throughput.

One CTO in our dataset hadn't taken a day off in over three years. We flagged it as a risk. The company called it commitment. Both readings are correct. Commitment is the fuel. Concentration risk is the bill.

79% of the companies we audit have this single point of failure. AI makes it worse, not better. Give a bottleneck CTO an AI coding assistant and they build ten times more code in half the time. The bus factor is still one. But now the bus is going faster, the codebase is ten times larger, and when that person eventually leaves, the knowledge gap leaves with them.

3. The testing

85% of the companies we audit have no meaningful automated testing. Too small, too fast, we'll add it later. Later is a place where tests live in theory and die in practice.

One company had six unit tests. Total. Across the entire codebase. They were testing a third-party library. An affectionate gesture toward someone else's code, not a safety net for their own.

AI generates test suites in seconds now. The "too busy" defence is collapsing. But we're watching for a new failure mode: the 600-test codebase where every test was AI-generated, every test passes, and none of them catch real bugs. A six-test company at least knows what it's missing.
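
The failure mode looks something like this (all names made up). The first test is green forever because it asserts what the mock was told to return; the second pins down real behaviour and can actually fail:

```python
from unittest.mock import Mock

import pytest

def test_charge_succeeds():
    # No bug in the product can ever make this fail: it tests the mock.
    billing = Mock()
    billing.charge.return_value = "ok"
    assert billing.charge(100) == "ok"

def apply_discount(price: float, pct: float) -> float:
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_apply_discount_rejects_nonsense():
    # This one earns its keep: it encodes a rule the product must obey.
    with pytest.raises(ValueError):
        apply_discount(100.0, 250)
```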

4. The secrets

40% of the companies we audit have credentials committed to their codebase. API keys, database passwords, payment provider tokens, sitting in plain text. The shortcut is intuitive: hardcode it to unblock the deploy, rotate later, move on. Later doesn't come, because the code works and nobody sees the liability until someone with audit rights shows up.

In one engagement, we accessed a live S3 bucket using credentials hardcoded three years earlier. The fix is a few hours of work. It had been "next sprint" for 36 months. "Next sprint" is doing a lot of load-bearing work in some codebases. It stretches.
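
For scale, the code half of that 36-month fix usually looks like this (variable names hypothetical); the remaining hours go on rotating the old key and scrubbing it from git history:

```python
import os

# Before: the shortcut that unblocked a deploy three years ago.
# S3_SECRET_KEY = "hardcoded-secret"  # never commit this

# After: read from the environment, fail loudly at startup if unset.
S3_SECRET_KEY = os.environ["S3_SECRET_KEY"]
```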

AI is genuinely fixing this one. GitHub's secret scanning flags committed credentials, and push protection blocks them before they ever reach the repository. But AI also cheerfully generates example code with hardcoded API keys. It gives with one hand and types OPENAI_API_KEY=sk-... with the other.
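
A toy version of what those scanners do, runnable against staged changes before a commit goes out (the patterns are illustrative, nowhere near exhaustive):

```python
"""Minimal secret check: grep staged files for credential-shaped strings."""
import re
import subprocess
import sys

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic api key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

hits = []
for path in staged_files():
    try:
        text = open(path, encoding="utf-8", errors="ignore").read()
    except OSError:
        continue
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(f"{path}: possible {label}")

print("\n".join(hits))
sys.exit(1 if hits else 0)
```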

5. The backlog

One company had 2,722 items in their backlog. The oldest was almost ten years old. "Add dark mode." "Investigate GraphQL migration." "Build feature Dave suggested at the offsite in 2019." Dave has since left. Nobody remembers what the feature was. The ticket lives on.

A backlog like that is an aspirations folder with Jira branding. AI can triage backlogs and auto-close stale items. But the real problem was never sorting. It was the courage to let go, and no tool fixes that.
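
The sorting half, at least, is scriptable. A sketch against Jira Cloud's REST search endpoint, with the URL and credentials as placeholders, that surfaces everything untouched for two years for a human to rule on:

```python
import requests

JIRA = "https://example.atlassian.net"       # placeholder
AUTH = ("auditor@example.com", "api-token")  # placeholder: email + API token

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": "updated <= -730d ORDER BY updated ASC", "maxResults": 100},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])
```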

6. The product manager

There isn't one. 67% of the companies we audit have no real product management function. The CTO doubles as product owner. The CEO prototypes in Figma or Lovable. Prioritisation happens based on whichever customer shouted loudest that week.

52% of companies don't measure whether anyone uses the features they ship. Build, deploy, move on.
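
Measuring is the cheap part: one table, one function. A minimal sketch with SQLite standing in for whatever store you already run (schema and names illustrative):

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("usage.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS feature_events (feature TEXT, user_id TEXT, at TEXT)"
)

def track(feature: str, user_id: str) -> None:
    # Call this wherever a feature is actually used.
    db.execute(
        "INSERT INTO feature_events VALUES (?, ?, ?)",
        (feature, user_id, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

# Then ask the awkward question: did anyone use it?
track("dark_mode", "user-42")
for feature, users in db.execute(
    "SELECT feature, COUNT(DISTINCT user_id) FROM feature_events GROUP BY feature"
):
    print(feature, users)
```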

This is a finding AI won't touch. The absence of product management is a structural choice, not a capacity problem. You can hand a team a Michelin-star kitchen and they'll still make toast if toast is what they think dinner is.

7. The deployment

52% of the companies we audit deploy manually. Quarterly releases. FTP uploads. One company's developers even had to physically drive to the office to restart the production servers. Their deployment pipeline had a commute.

Another company's disaster recovery plan, documented verbatim, read: "restarting the application solves most issues." The honesty is rare. The plan needs work.

8. The sprint

The standup happens. The board gets updated. Nobody follows up. One company described their sprint process as "a formality". Every team has inherited a process from someone who no longer works there. The rituals survive the owner. That's how you end up with a standup at 9:15 every morning that everyone attends and nobody needs.

We have audited companies that tried Scrum, Kanban, SAFe, something someone read about on a flight, and a system that was essentially a Jira board with opinions. Every one of them had the same feature: a process installed under different conditions, still running as if nothing had changed.

9. The outlier

In all these audits, a handful of companies scored as genuine positive outliers: cross-functional pods with dedicated PMs, QA engineers, and designers in every team. Across the whole group, we logged five minor concerns in total.

They had done the boring work first. While everyone else was hiring senior engineers to patch structural problems, which is the engineering equivalent of buying a faster car to fix a pothole, these outliers built the scaffolding that makes good practices sustainable. "We have a process and we actually follow it" turns out to be the most radical thing a SaaS company can say.

What changes from here

We expect the next 150 audits to look different. AI is removing the excuses that propped up half of these findings. Documentation debt will drop. Secrets in code will decline. Test coverage will rise. Whether the tests are any good is a separate question, and one we're already bracing for.

But some findings will get worse. The CTO bottleneck deepens as AI-assisted leaders build more, faster, alone. AI-generated codebases that nobody on the team understands will become a new category of technical debt. We're already seeing "AI provider dependency" in early audits: entire products built on a single LLM vendor with no fallback and a business model that assumes today's API pricing is permanent. Prompt injection will be the SQL injection of the next audit cycle.
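
The missing fallback is not exotic. A sketch of the shape, with stand-in functions where real vendor SDK calls would go:

```python
from typing import Callable

Provider = Callable[[str], str]

def primary(prompt: str) -> str:
    raise TimeoutError("vendor outage")  # stand-in for the main vendor's API call

def secondary(prompt: str) -> str:
    return f"fallback answer to: {prompt}"  # stand-in for a second vendor

def complete(prompt: str, providers: list[Provider]) -> str:
    last_error: Exception | None = None
    for call in providers:
        try:
            return call(prompt)
        except Exception as err:  # in production, catch each SDK's error types
            last_error = err
    raise RuntimeError("all providers failed") from last_error

print(complete("summarise this audit", [primary, secondary]))
```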

The findings that don't change are the human ones. Missing product managers. Sprint rituals on autopilot. Backlogs nobody has the courage to prune. AI accelerates whatever a team already has. The organised get more organised. The rest ship the same choices faster. The bingo card will get new squares, but the free space in the middle will still be "documentation last updated in 2023".

If any of this sounds familiar, you're in very good company. We've met over 180 of your neighbours, and we've been the neighbour ourselves on more than one project. It's all fixable.

And by now, you should've somehow realised what you gotta do.
However, if your deployment process involves a car, we should talk.