Imagine if airplane companies never tested their airplanes. They just ship them when they’re done. Would you feel safe boarding a plane like that?

Fortunately, airplanes are not built like that. Flying is statistically the safest mode of transportation, and there are strict regulations and tests that airplanes must pass.

Most people would agree that releasing a product without testing it is a crazy idea, whether you’re building airplanes, cars, or bicycles. We expect a product to satisfy a strict set of tests before it is released to the public. If a company releases a product that turns out to be faulty, it is subject to public ridicule. The first question that comes to mind is “Did they even test their product?”

What about software? For some reason, many people still think it is ok to build software without testing it.

Someone will test it

When people talk about testing, what they usually mean is automated tests, but let’s make one thing clear — every software program gets tested. Eventually, someone will run it and verify its output. If you are shipping software without testing it, you are letting your customers test it. As we saw with our airplane analogy, this is not a good idea and, hopefully, this is not what you are doing.

So you do test your software, but you just do it manually. You write the code for your feature and run your application. Then you go through a set of use cases or workflows to check if the application behaves as expected for each of them. Once it does, you ship it.

So what’s wrong with that? Why would you switch to automated tests?

Why automated testing?

Tell me if this sounds familiar. You write your feature and start checking if your application works as expected. But it doesn’t. There’s a bug so you fix it. You check again but find another bug so you fix it. After going through several (many) of these iterations everything finally works and you ship your code. As you are getting ready for launch your error reporting tool goes haywire. Turns out, when doing your last round of workflow check-ups you skipped one and missed a bug.

Manual testing is repetitive work and the problem with us humans is that we tend to forget things. Luckily, there are these things called computers that are very good at repetitive tasks. Why not use them to test our application? Once we give the computer our list of workflows to check, it will never forget to go through any of them.

Another problem with manual testing is that it takes time. The more workflows you have to check, the more time it takes. In a fairly complex software system, the number of possible paths an application flow can take grows fast, and testing them all manually quickly becomes impractical or simply impossible. So you resort to testing only some of them. Computers are much faster at processing information than humans, so they solve this issue too, allowing us to test our applications at dramatically greater speeds. The initial effort of writing extra code (an automated test) quickly pays off, since the tests can run before every commit, push, or deploy.


If you do decide to start writing automated tests, you may get overwhelmed by all the different terminology and jargon. In its essence, automated software testing is simple — you write a program to test your program. So if your program is a calculator that does addition, you write another program that sends some input to your calculator. For example, we send the numbers 2 and 3 and check whether the output of the calculator matches our expectation (it should be 5). In reality, we write more complicated things than simple calculators, but most of the time you will be testing whether your program gives the expected output for a given input.
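The calculator example above can be sketched in a few lines. The function and test names here are purely illustrative:

```python
# The "program" under test: a calculator that does addition.
def add(a, b):
    return a + b

# The program that tests our program.
def test_add():
    # Send the input 2 and 3 and check the output matches our expectation.
    assert add(2, 3) == 5

test_add()  # raises AssertionError if the calculator is wrong
```

In practice you would hand tests like this to a test runner (pytest, unittest, and similar tools) instead of calling them by hand, but the core idea stays the same: feed the program known input, assert on the output.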

Other things you will often want to test are object/service collaboration and side effects. For example, you may want to know if your user registration service “told” the mailer service to send a welcoming email to the new user or if the user object was persisted in your data storage.
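One common way to check this kind of collaboration is to replace the real collaborators with test doubles (mocks) and assert that the service called them as expected. This is a minimal sketch with Python’s standard `unittest.mock`; the `RegistrationService` class and its method names are hypothetical, not from any real library:

```python
from unittest.mock import Mock

# Hypothetical registration service that depends on two collaborators.
class RegistrationService:
    def __init__(self, mailer, storage):
        self.mailer = mailer
        self.storage = storage

    def register(self, email):
        user = {"email": email}
        self.storage.save(user)                # side effect: persist the user
        self.mailer.send_welcome_email(email)  # collaboration: "tell" the mailer
        return user

# In the test, the real mailer and storage are replaced by mocks.
mailer = Mock()
storage = Mock()
service = RegistrationService(mailer, storage)
service.register("new@user.example")

# Verify the service "told" its collaborators what to do.
mailer.send_welcome_email.assert_called_once_with("new@user.example")
storage.save.assert_called_once_with({"email": "new@user.example"})
```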

Keeping it efficient

As we pointed out earlier, computers are much faster than humans, allowing us to test our applications at much greater speed. But as the number of tests rises, running (and maintaining) them can start to take time. This is why you want your test suite to follow the so-called testing pyramid pattern.

The base of the pyramid consists of low-level unit tests. Unit tests target single units in isolation (a unit usually being a single function or class). As you go higher up the pyramid, your tests involve bigger “chunks” of your application (components and modules), with the highest-level tests booting up your whole application and testing it as a black box. The higher up the pyramid you go, the slower and more brittle (and therefore harder to maintain) the tests become, which is why you want a pyramid shape instead of an inverted cone.
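The difference between the lower layers can be sketched in a few lines. The functions here are illustrative stand-ins for real units of your application:

```python
# Unit level: a single function tested in isolation.
def sanitize(name):
    return name.strip().lower()

def test_sanitize_unit():
    assert sanitize("  Alice ") == "alice"

# Component level: a higher-level function that composes units,
# tested together with its collaborator.
def greet(name):
    return f"Hello, {sanitize(name)}!"

def test_greet_component():
    # Exercises greet() and sanitize() working together.
    assert greet("  Alice ") == "Hello, alice!"

test_sanitize_unit()
test_greet_component()
```

At the top of the pyramid, a black-box test would instead start the whole application (for a web app, issue real HTTP requests against it) and assert only on externally visible behavior, which is why those tests are the slowest.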

If you think about our airplane manufacturer analogy again, I don’t suppose they build the whole airplane and fly it just to check that the door on the toilet works? That can be tested in isolation, as can many other (more crucial) parts of the airplane, such as the engines. Once each unit is tested, they can start assembling units into components and modules, testing them along the way, and finally end up with test flights that test the airplane as a whole.

Tests first?

What if I told you that you should not only write tests, but write the test first, before you write the actual working code? This is what the practice of Test-Driven Development (TDD) is about. You write your test code first, and then the working code that satisfies that test. This seems counter-intuitive to many people, especially if they are not writing tests at all. After all, how can you test something that you haven’t built yet?

It is useful to think of your test code as a specification. Having a specification that defines how a certain part of the software should behave is a much more intuitive idea, and in a way, your test is just that — a specification of how certain parts of your software should behave. Seen this way, writing the tests (specifications) before writing the actual software code makes much more sense. After all, how can you write software if you haven’t specified how it should behave?
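A TDD round can be sketched like this. The test is written first, as the specification; only then is the simplest code written to satisfy it (the classic FizzBuzz exercise is used here purely for illustration):

```python
# Step 1: write the test first. This is the specification of the behavior
# we want. At this point fizzbuzz() does not exist yet, so running the
# test would fail with a NameError.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: write the simplest working code that satisfies the specification.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # now passes: the code satisfies its specification
```

The cycle then repeats: add a failing test for the next requirement, make it pass, clean up, and move on.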

Do I need it?

Learning how to write good automated tests will take some time and effort, so do you really need it? I mean, it’s not like you’re building airplanes, right?

Let me rephrase the question from the introduction. Would you board an airplane whose software (that it heavily relies on to fly) is not tested?

Maybe your web application failing won’t have the same consequences as airplane software failing, but as a professional you vouch for the correctness and reliability of your software. I am not claiming automated tests will make your software error-free, but they are the best tool we have for raising confidence in its correctness. Unless constant failing is a requirement of your software, you need to write tests, because testing is too important to be left to humans.

Got any questions? Start the discussion on Twitter