Not every company can have a “move fast and break things” attitude. In fact, even Zuckerberg, who made this the mantra at Facebook in 2012, revised it to “move fast with stable infrastructure” in 2014. Customers, especially paying enterprise customers, will not tolerate products that are frequently unavailable or that don’t live up to the standards set by big tech companies.

As products grow and become more complex, engineering teams struggle to oversee the impact of the changes they make. The first reflex is often to introduce a testing phase between development work and deployment, a phase whose duration quickly grows from a couple of hours to one or more days. During this phase, any work the development team does is held back, or, even worse, the team sits idle until testing is done.

The old way of testing – regressions and manual testing

There are, of course, some obvious solutions, such as turning the testing phase into a team effort and having everybody help out. This is a perfectly valid option, but companies often consider engineers too expensive to “waste” time on testing.

Teams using Gitflow continue pushing changes to their staging branch and create fixes for the release on the main branch. The catch is that this work quickly piles up. While Gitflow lets a team deploy hotfixes without deploying all other work, by branching from master, it does not address the fact that more and more work accumulates on the staging branch. The next deployment will contain a lot of changes and will therefore be extra risky.

This way of working introduces a rhythm in which QA engineers sit idle most of the time and are stressed during short bursts in between. Meanwhile, the engineering team keeps building things and only discovers bugs or wrong assumptions just before the release. It’s an assault on everyone’s health.

At madewithlove, we are strong believers in close collaboration between quality assurance (QA) and engineering teams. Having QA involved from the beginning can help you avoid unpleasant surprises towards the end.

The new way of testing – refinement and risk

The role of a QA engineer is then no longer to test whether everything (still) works, but to surface risk before development starts. They do this by describing the expected behavior of a bug fix or new feature in a way that removes any ambiguity. As a result, engineers know what the QA engineer will test and make sure their implementation takes it into account. If you have a group of responsible developers, they will even test it themselves.
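To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of how an unambiguous acceptance criterion, say “registration is rejected when the email address has no @ sign”, can be turned into a check the developer runs before the work ever reaches QA.

```python
# Hypothetical illustration: an acceptance criterion written without ambiguity
# ("registration is rejected when the email address has no @ sign")
# translated into a test the developer can run before handing the work to QA.

def is_valid_email(address: str) -> bool:
    """Toy validation used only for this illustration."""
    return "@" in address and not address.startswith("@") and not address.endswith("@")


def test_registration_rejects_email_without_at_sign():
    # The criterion spells out the exact input and expected outcome,
    # so the developer and the QA engineer are verifying the same thing.
    assert is_valid_email("jane.doe@example.com") is True
    assert is_valid_email("jane.doe.example.com") is False
```

The point is not the validation logic itself, but that a criterion this precise leaves no room for interpretation about what will be checked.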

The difference from old-style manual testing could hardly be bigger. Too many QA engineers hide how they test the application from the rest of the team. Testing truly is a black box for everyone outside of the QA team.

Their reasoning is that engineers who know how their work will be tested will optimize for it. But with well-written acceptance criteria, isn’t that exactly what you want? One could argue that developers will hardcode certain things in order to pass the tests, but if that happens, you may want to consider hiring a new team.

A focus on quality

If the QA engineer is no longer testing every new feature or changed behavior, what are they doing instead? The time they’ve saved can be spent attending refinement meetings, improving documentation and acceptance criteria for new features, and performing exploratory testing. They can even review recorded user sessions to understand where people struggle. Time spent guarding the quality bar can now be spent raising it.

There’s definitely a transition phase in which the QA engineers have to focus on both refinement and manual testing, and this may prove harder than you anticipate. Make sure to give your QA team the time to attend those refinement meetings and prepare the work better. Urge the rest of the team to jump in on manual testing for a couple of cycles until the balance is restored. If anything, it will improve their focus on quality.