You've got an app, and it's finally picking up traction. The early days of brittle releases and functional bugs are behind you, and the system is starting to feel somewhat mature. But gradually, another kind of bug sneaks in: the product starts to feel sluggish.
Performance issues are not as clear-cut as functional bugs or system errors. When a user presses a button and gets an error message, it's obvious that something is off. Those are the kinds of bugs users actively report.
Performance bugs, on the other hand, are often underreported. People don't tell you when a feature is a bit slower than it used to be. That doesn't mean they aren't frustrated. It means they don't feel like it's worth complaining about. They expect the service desk to ask whether they have turned their computer off and on again.
Typically, users experience the system slowing down until they reach a breaking point. By then, they will not tell you that the report management screen is slow. They will angrily shout that the entire product is unusable. And the rest of your users will loudly agree.
Performance degradation is one of those poisons that creeps up on you and slowly erodes user trust in the product.
The solution is simple: build an all-encompassing test suite that checks every response time and every screen for degraded performance. Run it every 5 minutes across all environments. The downside is that this kind of perfect solution is prohibitively expensive for most of us.
If you're reading this newsletter, you're not looking for perfect. You're looking for pragmatic.
So, how do you start tackling performance before it becomes an issue?
Exploratory testing
Engineers usually have a feeling for which parts of the system are slow, but it can't hurt to quantify this. We want to look at slow requests that are frequently executed by many users. To an engineer, a sales report that takes 10 minutes to generate feels like a good candidate for performance tuning. But if it's only executed quarterly by a single person, this isn't the low-hanging fruit we're looking for. I remember a client with a terribly slow dashboard where fixing a single wasteful database query made it feel blazingly fast for all users. That's what we are after.
Tools like Sentry can provide amazing insights into slow database queries and API calls. If you have no other tools in place, this is the perfect way to get started. List the worst offenders and plan a few days to let an engineer chip away at them.
Use representative test data for development
Another of my customers had a straightforward SaaS application that handled above-average amounts of data. A typical screen would work with thousands of rows. That's nowhere near Big Data territory, and most modern laptops don't break a sweat at that level. Yet each new feature introduced some kind of performance bug. What worked in the staging environment ground to a halt in production.
The problem was that engineers and testers worked with a small dataset. They would build a page, test it with a dropdown of 3 users, and mark it as done. In production, that dropdown was rendered with 5000 users!
We modified their test fixtures to work with roughly twice the data volume they could expect in production. Almost overnight, engineers started discovering those performance issues early and fixing them before going live.
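As an illustration, here's a minimal sketch of that idea in Python, using the Faker library to seed realistic rows. The seed_users function and the db handle are hypothetical placeholders for whatever seeding mechanism your project already has; the point is the row count, not the plumbing.

```python
from faker import Faker

fake = Faker()

# Production holds roughly 5000 users; seed about twice that so
# performance problems surface in development, not after release.
PRODUCTION_USER_COUNT = 5000
SEED_USER_COUNT = PRODUCTION_USER_COUNT * 2

def seed_users(db):
    """Insert SEED_USER_COUNT fake users.

    `db` is a placeholder for your project's database handle.
    """
    for _ in range(SEED_USER_COUNT):
        db.insert("users", {"name": fake.name(), "email": fake.email()})
```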
If your users work with a large amount of data, make sure your engineers do too.
Generate API-level tests
Testing each endpoint in isolation is only part of the solution. Screens can still feel slow even when the API is blazingly fast: if your frontend sends thousands of requests to the backend, it's going to feel unresponsive.
However, API-level testing is simple and gives us hard metrics to play with. We can expect each REST call to complete within 250ms, for example.
Frameworks like Locust make outside-in load tests simple. They simulate hundreds of users calling an endpoint and can run in your CI/CD pipeline or as a nightly cron job. The beauty of these tools is that they are so limited in scope that they are almost trivial to write. Log in, call the endpoint, and check that it returns HTTP 200 within the expected timeframe.
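A minimal Locust file for that pattern could look like the sketch below. The /login and /api/reports endpoints, the credentials, and the 250ms budget are placeholders; swap in your own routes and thresholds.

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Each simulated user waits 1-2 seconds between requests.
    wait_time = between(1, 2)

    def on_start(self):
        # Runs once per simulated user: log in and keep the session cookie.
        # Endpoint and credentials are placeholders for your own auth flow.
        self.client.post("/login", json={"username": "test", "password": "secret"})

    @task
    def list_reports(self):
        # catch_response lets us mark even a 200 as failed if it's too slow.
        with self.client.get("/api/reports", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"Expected HTTP 200, got {response.status_code}")
            elif response.elapsed.total_seconds() > 0.25:
                response.failure("Response took longer than 250ms")
```

Run it headless in CI with something like `locust -f locustfile.py --headless -u 100 -r 10 --run-time 1m --host https://staging.example.com` (the host is a placeholder). In headless mode, Locust exits with a nonzero code when requests fail, so a pipeline step can fail the build automatically.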
This is one of those tasks you can easily offload to an AI agent. Create one example and let Claude Code generate the rest. You can have your entire API covered with performance tests in a few hours.
Again, not the perfect all-encompassing solution, but a great, efficient way to get started.
Performance testing, like unit testing, is a bit of an art. It's a lot of extra work we can never make time for until we wish we had. These three tips will not give you the state-of-the-art tools to squeeze every last bit of performance out of your product. But they will give you a cheap, effective way to build a solid foundation.
If your system starts to feel slow, use tools like Sentry to find the worst offenders, let engineers work with production-sized data, and let Claude generate most of the load tests.