Quality Testing

Test Level Overview

To get the best return on the time you put into testing an application, you must be aware of the different levels in the testing pyramid. Unit tests should form the largest layer, and every piece of code that can be unit tested should be. A general rule of thumb: if code is ever executed, there is an expected outcome, and that outcome should be tested. If you rely on code, you should rely on tests to make sure the code is working properly.

When the pyramid is ignored, a lack of restraint and bad practices create a testing ice cream cone: a lot of manual testing for each release, some automated tests that are only somewhat relied on, a few weak integration tests, and the smallest number of flimsy unit tests. It is the antithesis of the testing pyramid: manual, slow, and prone to large gaps in coverage, because the effort is inverted relative to how the components should be tested.


Unit Test

A unit test is the smallest test you can write. It should test a single function, and it should have no external dependencies. There are no databases being spun up, and any and all web requests should be mocked out: values go in, assertions come out. There should be no need to start a server to run unit tests. Unit tests aren't always the easiest tests to write, but over the long term, they are what pay off in speed and reliability.

If the scenario at hand is testing password requirements, there are a multitude of tests that should be covered with unit tests, and fewer and fewer tests as you scale up the pyramid.

A few of those tests would be named something along the lines of:

“Password length must be at least 6 chars”
“Password must contain at least 1 special symbol”
“Password must contain at least 1 uppercase and 1 lowercase character”
“Passwords must match”
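The checks above can be sketched as unit tests. This is a minimal sketch: the `validate_password` function, its signature, and the exact rule strings are illustrative assumptions, not taken from any real codebase.

```python
import re

# Hypothetical validator -- the function and its rule strings are
# assumptions made for illustration.
def validate_password(password: str, confirmation: str) -> list:
    """Return a list of human-readable rule violations (empty means valid)."""
    errors = []
    if len(password) < 6:
        errors.append("Password length must be at least 6 chars")
    if not re.search(r"[^A-Za-z0-9]", password):
        errors.append("Password must contain at least 1 special symbol")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)):
        errors.append("Password must contain at least 1 uppercase and 1 lowercase character")
    if password != confirmation:
        errors.append("Passwords must match")
    return errors

# pytest-style tests: no server, no database, no network --
# values in, assertions out.
def test_password_length_must_be_at_least_6_chars():
    assert "Password length must be at least 6 chars" in validate_password("Ab!1", "Ab!1")

def test_password_must_contain_at_least_1_special_symbol():
    assert "Password must contain at least 1 special symbol" in validate_password("Abcdef1", "Abcdef1")

def test_passwords_must_match():
    assert "Passwords must match" in validate_password("Abcdef!1", "Abcdef!2")

def test_valid_password_has_no_errors():
    assert validate_password("Abcdef!1", "Abcdef!1") == []
```

Note how each test name reads like a requirement; a failing test tells you exactly which rule broke without opening a debugger.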

These will run in milliseconds as unit tests. The same checks would take many seconds each if a test required a web browser to navigate to the login page and try to log in with many different password variations. The unit tests don’t require a server to be up, while the e2e tests would.


Integration Test

Integration is a widely encompassing term: it covers pretty much everything above a unit test and below a live service test. An integration test can spin up any dependency in memory without reaching out to existing services, and it may involve starting portions of an application.

An example I’ve come across many times is spinning up an in-memory database and making sure the application can connect to it. For instance, an application may spin up an in-memory instance of Postgres and ensure that it can perform each of the CRUD operations.

Some integration tests would be named something along the lines of:

“App connects to Postgres server”
“addUser controller is able to create a user”
“deleteUser controller is able to delete a user”
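The tests above can be sketched with the standard library. As an assumption for the sake of a runnable example, SQLite's in-memory mode stands in for the in-memory Postgres instance described earlier, and the `UserRepository` class with its `add_user`/`delete_user` methods is a hypothetical stand-in for the controllers:

```python
import sqlite3

# Hypothetical data-access layer; in a real suite this would be the
# application's own repository or controller code.
class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def add_user(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def get_user(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

    def delete_user(self, user_id):
        self.conn.execute("DELETE FROM users WHERE id = ?", (user_id,))

# Each test spins up a fresh in-memory database, so tests stay
# independent and nothing touches an existing service.
def test_add_user_controller_is_able_to_create_a_user():
    repo = UserRepository(sqlite3.connect(":memory:"))
    user_id = repo.add_user("alice")
    assert repo.get_user(user_id) == "alice"

def test_delete_user_controller_is_able_to_delete_a_user():
    repo = UserRepository(sqlite3.connect(":memory:"))
    user_id = repo.add_user("bob")
    repo.delete_user(user_id)
    assert repo.get_user(user_id) is None
```

The key property is that a real database engine executes the SQL, so the test exercises the boundary between application code and its dependency without any live infrastructure.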

Live Service

A live service test is also called an API test. This is where you’re sending a request to a live server and getting a real response. There are no mocks happening here. Each test should only test a single endpoint. There may be a lot of code that is executed when an endpoint is hit, and there are probably unit tests and integration tests to cover what that code does. Service tests should not only cover the happy path scenario, but also some other edge cases that will absolutely happen in the wild. There are no web browsers involved, so these can execute as fast as your network can curl and as fast as the server can respond.

A few tests here would be named something along the lines of:

“createUser endpoint returns 201 when successfully creating a user”
“createUser endpoint returns 400 when required params are missing”
“createUser endpoint returns 403 when user is not authorized”
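The first two cases above can be sketched as follows. In a real service test the requests would go to a deployed server; here, as a self-contained assumption, a tiny in-process `http.server` stands in for the live deployment, and the `/users` endpoint and its JSON payload shape are hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Stand-in for the live service: accepts POST /users and responds
# 201 on success, 400 when the required "name" param is missing.
class UserHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        self.send_response(201 if "name" in body else 400)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def post_user(port, payload):
    """Send a real HTTP request and return the response status code."""
    req = Request(
        f"http://127.0.0.1:{port}/users",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        return urlopen(req).status
    except HTTPError as err:
        return err.code

# Start the stand-in server on a free port in a background thread.
server = HTTPServer(("127.0.0.1", 0), UserHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Service tests: real requests, real responses, no mocks, no browser.
assert post_user(port, {"name": "alice"}) == 201  # happy path
assert post_user(port, {}) == 400                 # missing required params
```

Each assertion maps to one of the test titles above; the 403 case would follow the same pattern once the service enforces authorization.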

E2E Test

An end-to-end test, workflow test, Selenium test, or web test (it is called many different things) is the highest level of testing, and it is the most expensive. It requires some sort of infrastructure to run; the application servers need to be up, and a server to execute the tests also needs to be up. These can all run on a local machine, but they are still servers nonetheless.

They are the most brittle tests because they rely on locators to interact with elements on a page. Each time a page changes, locators may need to change as well, or the test breaks. They can be cheap to write, but the cost of maintenance is high for a changing application.

They are also the best automated way to confidently test a workflow that a user may perform: logging in, navigating the page, and dealing with rendering across different browsers. Whether it’s Firefox, Chrome, or whatever version of IE or Edge you must support, this is the level where that gets verified.

Some test titles for a workflow test would be something along the lines of:

“Admin is able to create a new user through the admin panel”
“Admin is able to delete a user through the admin panel”
“Customer is able to search for an item and add to shopping cart”
“Customer is able to purchase a single item in shopping cart”
“Customer is able to purchase multiple items in shopping cart”


I’m hoping to really hammer the point home that tests are important, and part of that is choosing the appropriate level at which each test executes. The example below describes the pain of so many different apartments I’ve rented, complete with the obligatory “Unit tests passed” image and the lack of care from apartment management.

Test Level    Description
Unit          Drawer works fine
Integration   Drawer might open, depending on the size of the handles used
Live Service  Drawer cannot open with the other drawer in place
E2E           I need to call the installer. I can’t believe I paid for this kind of service.

[Image: drawers can't open]