Page Object Model

Why page objects are important

The Page Object Model (or pattern) is the de facto pattern for writing automated UI tests. It’s a framework for separating the business logic of a test from the simulated user actions that execute a workflow.

A page object is an abstract representation of a single webpage. The page object should contain a representation of the elements on the page, found by locators. Methods within the page object should contain simple user actions (get, set, click, submit) that use those locators. These actions can be woven together for a test to interact with a page.

  • Enter the username
  • Enter the password
  • Click the login button

These actions can be wrapped up in a method called login, which may accept a username and a password. This login method would belong to a page object called “LoginPage”. Together, this would look like:

loginPage.login(username, password)

Now, any test that requires a user to log in can initialize the LoginPage object and call the login method. This pattern allows a single page object to be used across many tests, reducing the amount of code needed to write them. Following this pattern also reduces maintenance: any change made to a page object applies to every test that uses it.

Creating Page Objects

Page object methods should represent actions, or small sets of actions, that a real user could perform within a few seconds. Entering a string into a text box is an example of a good page method. A long or complicated series of actions is not acceptable and should be broken up into smaller, simpler sets of actions.

Page logic should be self-contained. Any logic that crosses the boundary into another page object is probably test logic and should be extracted. In the spirit of reusing things, we’ll reuse the login page example. It would be a bad idea to include a method called clickAccountSettings in the LoginPage class, because that element is only available to a logged-in user. If the user is logged out, there are no account settings to interact with. A better approach is a second page object called LandingPage that contains the clickAccountSettings method. The test would then contain the following two lines:

landingPage = loginPage.login(username, password)
landingPage.clickAccountSettings()
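
As a minimal plain-Java sketch of this handoff (all class and method names here are hypothetical, and the Selenium calls are omitted for brevity), the login flow might look like:

```java
// Hypothetical sketch: page objects hand off to each other as the user
// moves through the application. Real WebDriver calls are omitted.
public class LoginFlow {

    static class LandingPage {
        // Belongs here, not on LoginPage: only a logged-in user sees it.
        String clickAccountSettings() {
            return "account-settings"; // placeholder for a real click
        }
    }

    static class LoginPage {
        // login() performs the three user actions and returns the next page.
        LandingPage login(String username, String password) {
            // enter username, enter password, click the login button...
            return new LandingPage();
        }
    }

    public static void main(String[] args) {
        LandingPage landingPage = new LoginPage().login("user", "hunter2");
        System.out.println(landingPage.clickAccountSettings());
    }
}
```

Returning the next page object from login keeps each page's logic self-contained while still letting a test chain the two steps together.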

Hierarchy of a Page Object

A page object class should extend a BasePage. The BasePage should have locators and methods to interact with elements that are common across all pages. In reality, some pages may not have a navbar (think of the login page). If BasePage still contains those locators and the test author simply knows not to interact with them on such pages, that is fine. It’s a stylistic preference.

The other approach for that situation is to have BasePage and CompanyBasePage. CompanyBasePage inherits from BasePage. LoginPage would inherit from BasePage and almost all other pages from CompanyBasePage. This approach adds another layer of hierarchy that comes in handy if a single framework handles multiple applications. Suppose the company has Product A and Product B. The two products contain different web pages. The benefit of this approach would be that any function in BasePage can apply to both products – whether that’s a function to find items in a list or to open a database connection – the updates apply to both products.
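
A sketch of that two-layer hierarchy in plain Java (class and method names here are hypothetical, and the driver plumbing is omitted):

```java
// Hypothetical sketch of the two-layer base page hierarchy.
public class PageHierarchy {

    static class BasePage {
        // Utilities that apply to every product (e.g. generic waits,
        // list helpers, database connections).
        String pageType() { return "base"; }
    }

    // Product-specific shared chrome (navbar, footer) lives one layer down.
    static class CompanyBasePage extends BasePage {
        String clickNavbarHome() { return "home"; }
    }

    // The login page has no navbar, so it extends BasePage directly.
    static class LoginPage extends BasePage { }

    // Almost every other page extends CompanyBasePage.
    static class DashboardPage extends CompanyBasePage { }

    public static void main(String[] args) {
        System.out.println(new DashboardPage().clickNavbarHome());
    }
}
```

Anything added to BasePage reaches every page in both products; anything added to CompanyBasePage reaches only that product's pages.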

Creating Locators

The easiest way to create a locator is to first find the element you want on a page using Chrome or Firefox.

  • Right-click on the element and select Inspect.
  • Right-click on the highlighted element in the DevTools panel and then click Copy -> Copy selector (Chrome) or Copy -> CSS Path (Firefox)

You’re now able to locate elements by CSS or XPath. Any matching elements get highlighted in the DevTools panel. Use this method when creating a page to make sure the selectors you are creating are valid. The longer and more convoluted a locator is, the more likely it is to break when the page changes. Try to use the simplest selector you can. In order from most preferred to least preferred:

  • A custom HTML attribute such as “data-locator,” “se-locator” or whatever name you choose.
  • Id
  • CssSelector
  • Xpath
  • Name

Don’t use an element’s text to define its selector. While it may seem straightforward, it breaks each time the text changes. It’s also a non-scalable approach if A/B testing is involved, as the wording of text often changes. This approach also fails when testing multiple languages.

The Page Constructor

The page constructor is the only method in a page object that should contain assertions. This ensures that the page loaded with the correct information present. In addition to assertions, shared elements and objects should be instantiated in the constructor to reduce re-fetching of those elements.

Things to check in a constructor

  • Are important elements visible?
  • Are the expected buttons correctly enabled/disabled?
  • Are the breadcrumbs displayed?

Example page constructor

By MENU_BUTTON = By.cssSelector("div[se-locator='menu']");

public SomePage(WebDriver driver) {
    menuButton = getWebElement(MENU_BUTTON);
    assertTrue(menuButton.isDisplayed()); // assert the page loaded correctly
}

Page Methods

Page methods should either return information about the page to the caller or perform actions on the page. There should be no assertions within any page method. For example, if the user entered text into a field, the method that put it there should return what was actually recorded. This allows a test to check that all of the characters were entered.
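
A small sketch of that convention (names hypothetical; the Selenium sendKeys call is replaced with a plain field so the example is self-contained). The setter returns what actually ended up in the field, while the assertion itself stays in the test:

```java
// Sketch: page methods return page state; assertions live in the test.
public class UsernameField {
    private String value = "";

    // Simulates typing into the field and returns what was recorded,
    // so the test can check that no characters were dropped.
    public String setUsername(String text) {
        this.value = text; // real code would call element.sendKeys(text)
        return this.value; // real code would read it back from the element
    }

    public static void main(String[] args) {
        UsernameField field = new UsernameField();
        System.out.println(field.setUsername("alice"));
    }
}
```

The test then decides what to do with the returned value, for example asserting that it equals the string it tried to enter.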

Common Page Methods

Because page methods act as an interface between a single user action and a test, the methods are pretty boring. You only check whether something is visible, or enter text, in a finite number of ways. Each page object will have methods that wrap the important fields on the page that need to be interacted with.


All of these page-specific methods boil down to a few commonly used Selenium bindings. First the element is located (returning a WebElement object), and then that element is acted upon.

Test Level Overview

To get the best test performance for the time you put into an application, you must be aware of the different testing levels in the testing pyramid. Unit tests should be the largest layer, and every piece of code that can be unit tested should be unit tested. A general rule of thumb: if code is ever executed, there is an expected outcome, and that outcome should be tested. If you rely on code, you should rely on tests to make sure the code is working properly.

When the pyramid is ignored, our lack of restraint and bad practices create a testing ice cream cone: a lot of manual testing for each release, some automated tests that are somewhat relied on, some weak integration tests, and the smallest amount of flimsy unit tests. It’s the antithesis of the testing pyramid. It is manual, slow, and may have large gaps in coverage because the testing effort is inverted relative to the pyramid.


Unit Test

The smallest test you can make. It should test a single function, and it should have no external dependencies. There are no databases being spun up. Any and all web requests should be mocked out. This is truly garbage-in, garbage-out. There should be no need to start a server to run unit tests. Unit tests aren’t always the easiest things to write, but long term, they are what pay off in performance.

If the scenario at hand is testing password requirements, there are a multitude of tests that should be covered with unit tests, and fewer and fewer tests as you scale up the pyramid.

A few of those tests would be named something along the lines of

  • “Password length must be at least 6 chars”
  • “Password must contain at least 1 special symbol”
  • “Password must contain at least 1 uppercase and 1 lowercase character”
  • “Passwords must match”

These will run in milliseconds as unit tests. They would take many seconds to run if a test required a web browser to navigate to the login page and try to log in with many different password variations. The unit tests don’t require a server to be up, while the e2e tests would.
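
Sketching those rules as plain Java (the exact rules and names here are hypothetical), each requirement becomes a tiny, millisecond-fast unit check:

```java
// Hypothetical password rules, each independently unit-testable
// with no server, browser, or database involved.
public class PasswordRules {

    public static boolean isLongEnough(String p) {
        return p.length() >= 6;
    }

    public static boolean hasSpecialSymbol(String p) {
        // At least one character that is not a letter or digit.
        return p.matches(".*[^A-Za-z0-9].*");
    }

    public static boolean hasMixedCase(String p) {
        return p.matches(".*[A-Z].*") && p.matches(".*[a-z].*");
    }

    public static boolean passwordsMatch(String p, String confirmation) {
        return p.equals(confirmation);
    }

    public static void main(String[] args) {
        System.out.println(isLongEnough("Ab#123"));     // true
        System.out.println(hasSpecialSymbol("Ab#123")); // true
        System.out.println(hasMixedCase("Ab#123"));     // true
    }
}
```

Each bullet from the list above maps to one method and one or two assertions, which is exactly the granularity the bottom of the pyramid wants.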


Integration Test

Integration is a widely encompassing term. It covers pretty much anything above a unit test and below a live service test. An integration test can spin up any dependency in-memory rather than reaching out to existing services, and it may involve starting portions of an application.

An example I’ve come across many times is spinning up an in-memory DB and making sure the application can connect to it. An application may spin up an in-memory instance of Postgres and ensure that it can perform any of the CRUD operations.

Some integration tests would be named something along the lines of:

  • “App connects to Postgres server”
  • “addUser controller is able to create a user”
  • “deleteUser controller is able to delete a user”

Live Service

A live service test is also called an API test. This is where you’re sending a request to a live server and getting a real response. There are no mocks happening here. Each test should only test a single endpoint. There may be a lot of code that is executed when an endpoint is hit, and there are probably unit tests and integration tests to cover what that code does. Service tests should not only cover the happy path scenario, but also some other edge cases that will absolutely happen in the wild. There are no web browsers involved, so these can execute as fast as your network can curl and as fast as the server can respond.

A few tests here would be named something along the lines of:

  • “createUser endpoint returns 201 when successfully creating a user”
  • “createUser endpoint returns 400 when required params are missing”
  • “createUser endpoint returns 403 when user is not authorized”
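
A sketch of the request such a test would send, using Java’s built-in HTTP client (the base URL, path, and payload here are hypothetical; a real test would send this with HttpClient and assert on the response status code):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: build the createUser request a live-service test would send.
// A real test would send it via HttpClient.newHttpClient().send(...)
// and assert that response.statusCode() is 201, 400, 403, etc.
public class CreateUserRequest {

    public static HttpRequest build(String baseUrl, String jsonBody) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = build("https://example.test", "{\"name\":\"alice\"}");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Each test in the list above would reuse this request builder with a different body (missing params, missing auth) and assert on a different status code.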

E2E Test

An end-to-end test, workflow test, Selenium test, or web test (it is called many different things) is the highest level of testing, and it is the most expensive. It requires some sort of infrastructure to run; the application servers need to be up, and a server to execute the tests also needs to be up. These can all run on a local machine, but they are still servers nonetheless.

They are the most brittle tests because they require locators to interact with elements on a page. Each time a page changes, locators may need to be changed as well, and the test can be broken. They can be cheap to write, but the cost of maintenance is high for a changing application.

They are also the best automated way to confidently be able to test a workflow that a user may perform. From logging in, to navigating on the page, to dealing with the rendering for the different browsers. Whether it’s Firefox, Chrome, or whatever version of IE or Edge you must support – this is the level it will happen at.

Some test titles for a workflow test would be something along the lines of:

  • “Admin is able to create a new user through the admin panel”
  • “Admin is able to delete a user through the admin panel”
  • “Customer is able to search for an item and add to shopping cart”
  • “Customer is able to purchase a single item in shopping cart”
  • “Customer is able to purchase multiple items in shopping cart”


I’m hoping to really hammer home the point that tests are important, and part of that is choosing the appropriate level for each test to be executed at. The example here, in the spirit of the obligatory “Unit tests passed” image, describes the pain of so many different apartments I’ve rented and the lack of care from apartment management.

(Unit) The drawer works fine.
(Integration) Drawer might open, depending on the size of the handles used.
(Live) Drawer cannot open with other drawer in place.
(E2E) I need to call the installer. I can’t believe I paid for this kind of service.

Data Interactions Within Tests

Correct data use is an integral part of scaling an organization on the software development front. Not knowing what a test needs is a warning sign that the test may fail in higher environments, reducing confidence in future deployments and increasing time spent debugging failed operations. Thankfully, this is a solved problem with a simple paradigm. The number one thing that needs to happen is that each test should be idempotent: it can be run any number of times (system load aside) and should return the same result while not affecting other tests. To achieve this, each test needs to create and destroy its own data.

Data Creation

A test should not rely on pre-existing data being in the database. A test should not rely on certain data being arranged or built and already in the environment. Instead, a test should seed its own data.

Leverage test annotations depending on your data needs.

Data creation needs will change from scenario to scenario. A login can (in many cases) be shared among tests in the test class. Whether it’s a service-level test or an end-to-end test, logging in once and passing that authentication around is usually a good practice. However, a unique transaction or purchase object may be needed for each specific test – for example, to test login attempts with invalid or revoked credentials.

Data should not be inserted into the database through direct database calls. That would be dangerous and difficult to maintain. Instead, the tests should call a controller or API endpoint directly. Utility (or helper) classes can be used to create complex objects that may require multiple other embedded objects.

An example of that is the need to test that an employee has an emergency contact number. You start by generating a phone number and assigning it to an employee; that employee belongs to a department, and maybe that department to an organization. That would be a lot of work to maintain if raw SQL were being used. Using endpoints to create the data that’s needed is much less maintenance in the future.

Note: To make the test data more unique (and visually appealing), you can leverage a library called Faker to create the random alphanumeric characters required.

Data should be created before any navigation occurs in a test. This ensures that an end-to-end test navigates to a page where it expects data, and the data is already created and not possibly still in flight. If the system is slow to create data, the amount of time spent in creation can cause a problem where data is expected on a page, but it’s not being drawn in because it’s still being added to the database. This can cause a flaky test due to the performance of the APIs. This is often easy to avoid, so let’s avoid it.

Data Use

The only data that should be created is the data that is going to be used by a test. Anything created but unused wastes time and resources and will confuse the next person to edit that area of code. Any code left in a test that’s doing nothing adds needless complexity. If not enough data is created, the test obviously won’t work correctly. Use only what you need, and cut everything else out.

Data Destruction

Much like the Scouts, we want to leave no trace. All data that is created during testing should also be destroyed. This is best served by creating an object in the BaseTest class (let’s call it DataCollector), and each time a test creates test data, it should be added to DataCollector.

Person person = DataCollector.add(new PersonBuilder("firstName", "lastName").build().execute());
System.out.println("Person ID is: " + person.getId());

With every test extending a BaseTest, DataCollector can be called to destroy all of the data automatically for every test. Ideally, all of the API delete endpoints have cascading deletes, so deleting the organization from the example above also removes its departments and people. If the endpoints don’t support cascading deletes, you can control the order in which deletes occur from DataCollector by removing all objects of one type, followed by their parent objects, all the way up the chain. It is more work, but it’s better than sometimes, maybe, cleaning out the database by hand. A cron job to run a cleanup script isn’t bad either, but it’s less ideal because you have to wait for it.
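
A sketch of such a collector (all names hypothetical, with plain string ids standing in for real objects): it records objects as they are created and deletes them in reverse creation order, so children created after their parents are removed first.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: collect created objects, then delete in reverse creation order
// so dependents (created last) are removed before their parents.
public class DataCollector {
    private final Deque<String> createdIds = new ArrayDeque<>();
    private final List<String> deleted = new ArrayList<>();

    public String add(String id) {
        createdIds.push(id); // most recent creation ends up on top
        return id;
    }

    public List<String> destroyAll() {
        while (!createdIds.isEmpty()) {
            String id = createdIds.pop();
            // A real implementation would call the delete endpoint here
            // and tolerate a 4xx for objects already removed by the test.
            deleted.add(id);
        }
        return deleted;
    }

    public static void main(String[] args) {
        DataCollector collector = new DataCollector();
        collector.add("organization-1");
        collector.add("department-1");
        collector.add("person-1");
        System.out.println(collector.destroyAll());
    }
}
```

In the employee example, the person is deleted first, then the department, then the organization, without the test author having to think about ordering.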

When the test suite has finished executing, you can go through the list of collected objects and delete each one by its id. When a delete returns a 4xx response, you can often safely ignore it, because some objects get deleted while testing the frontend.

The belt-and-suspenders approach to ensuring that the environment is clean is to run queries confirming that the database is empty (sans seed data, of course). Run the tests and then allow the cleanup script to run. Running the initial queries once more should return the same results – no extra data in the database. If that is not the case, then either some tests are not using DataCollector or one or more of the delete operations is failing (because of the deletion order, or because the delete endpoints aren’t working correctly).

Variable Names

A variable name is a representation for a piece (or pieces) of data. It should accurately describe the underlying data.

Variable names should be meaningful. Seeing something called flag tells you it’s probably a Boolean, but it doesn’t describe the intention whatsoever. You don’t know what its purpose is. A good variable name reduces ambiguity. It allows the developer to understand what the data represents. flag is rarely used correctly. stylizedFontsEnabled tells a much more detailed story about what is represented.

Sometimes it’s hard to come up with a descriptive name. Naming things is hard. However, it is a worthy time investment, and it should be taken seriously. It’s becoming increasingly common that a software developer will leave their job and go somewhere else. This means a new codebase, a new opportunity, and potentially a new nightmare. If you’ve ever had to sift through someone else’s code that didn’t make sense, then you may be able to empathize. Looking at code is a logic puzzle, and you should make it easy for the next person, or for your future self when something breaks.

Variable names may change over time as the context in which they operate changes. As a very simple example, imagine a form where a user must enter their name. Initially, you’d probably call the variable name. Over time, that form may change to accept a first name and a last name. Now name is not a good variable name, because it doesn’t describe which piece of information it should hold. Variables like firstName, lastName, or fullName would be better suited to the scenario.

Clean Code does a great dive into this topic.

Leveraging Annotations in Tests

The purpose of leveraging annotations (or hooks) is so you can set up an environment and have the appropriate amount of data seeded for the tests to use. Common JUnit hooks are covered here. Any modern testing framework leverages this basic model.

A quick rundown of when to use what:


BeforeClass (or equivalent) when you want something to execute once for the test class, before any tests run, and before the BeforeTest code runs.

This is where you set up your environment and prepare any data. Any data that is shared between all tests belongs in here.


Before will run before each test method. If you have 3 tests, whatever is in Before will run 3 times in total.

This is where data preparation should go if a fresh set of data is needed for every test. If this is an E2E (web) test, this is also where the browser should be opened and navigation to the appropriate page for testing should occur.


After runs after each test method.

This is where data cleanup should go if you are creating new data before each test.


AfterClass (or equivalent) runs after all of the tests have finished, and after the last execution of After.

This is where data cleanup should happen for anything that was created in BeforeClass. If having a pristine environment is a goal, this is also where you should try to delete anything that was created but not successfully deleted earlier. If something still can’t be deleted, log the information so it’s not lost; it may require someone to look into why the delete is failing. This often uncovers bugs around edge cases that weren’t thought about.
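
To make the ordering concrete, here is a plain-Java simulation of the lifecycle for a class with two tests (no JUnit dependency; the hook bodies just record their names in order):

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the hook ordering a JUnit-style framework applies
// to a test class containing two test methods.
public class LifecycleOrder {

    public static List<String> run() {
        List<String> log = new ArrayList<>();
        log.add("BeforeClass");                 // once, before everything
        for (String test : new String[] {"test1", "test2"}) {
            log.add("Before");                  // before each test
            log.add(test);                      // the test itself
            log.add("After");                   // after each test
        }
        log.add("AfterClass");                  // once, after everything
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Shared data setup belongs in the BeforeClass slot, per-test data in the Before slot, and the matching cleanup in After and AfterClass respectively.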

Method Names

Creating a good name can be a difficult task at first, but it pays dividends in the long run. The larger a codebase gets, the more complex it becomes, and the more team members work on it, the more important clear, concise method names become.

Methods should be named for the intention they serve. They should serve a single purpose and do a single thing. Methods should always start with a verb, because a method should be doing something. Methods should not accept a lone boolean to determine what they do; that would be better served as 2 (or even 3!) methods. A bare boolean in a method call reads like doSomeTask(false). What? That isn’t clear. doSomeTask and doSomeOtherTask make more sense.
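
A small sketch of the boolean-parameter problem and its clearer two-method replacement (the class and method names here are hypothetical):

```java
// Sketch: a boolean parameter hides intent; two named methods expose it.
public class PanelToggle {
    private boolean visible;

    // Unclear at the call site: panel.setVisible(false) ... what happened?
    public void setVisible(boolean visible) {
        this.visible = visible;
    }

    // Clear at the call site: panel.show() / panel.hide()
    public void show() { this.visible = true; }
    public void hide() { this.visible = false; }

    public boolean isVisible() { return visible; }

    public static void main(String[] args) {
        PanelToggle panel = new PanelToggle();
        panel.show();
        System.out.println(panel.isVisible());
    }
}
```

Both styles do the same thing; the named methods simply make the caller's intent readable without looking up the parameter's meaning.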

Of course, good practices for naming methods also apply to the names of test methods. Tests should be named according to what the user is trying to accomplish. When you see a failed test title, you should have a good idea of what the test was trying to do. A test method with a small scope and a clear, distinct name allows anyone to quickly and easily identify what it is doing.

thisShouldDoThat is usually a good approach to the name. If you’re having trouble deciding how to name your test, ask yourself:

  • Is this test trying to verify too much, and should it be broken up?
  • Is this a full test, or should this be part of another test?

A good indicator of a bad test name is if it includes any of the following words:

  • check
  • verify
  • test
  • assert
  • correct
  • right
  • good
  • okay

“Check”, “verify”, “test”, “expect”, and “assert” are bad words for a test name because it is a test. If you’re not checking/verifying/testing/expecting/asserting something, then it wouldn’t be a test. The words are redundant and unnecessary.

“Correct”, “right”, and “good” are bad words because somebody unfamiliar with the feature who is looking at a failure doesn’t know what they mean. Be more specific about what criteria make it correct/right/good.

If a test is named eventStatusIsCorrect, how does someone who hasn’t worked on the feature know what correct is? eventStatusIsNotNull is an improvement. But again, we can go further. What happens that makes the event status not null? eventStatusIsNotNull_WhenStatusButtonIsClicked is a good test name. It says what is expected and what action takes place to make that expectation.

This is a carefully crafted list of bad test names, and generally bad practices that you can look at as reference of things not to do.

Clean Code also does a great dive into this topic as well.

Naming Test Classes

Test classes should represent one story or even more likely, part of one story. Test classes with a clear and small scope allow anyone to quickly identify what the class is doing. It is completely valid to have multiple test classes with multiple tests per user story that your team delivers.

Like test methods, test classes should be named based on the feature, and it should be easy to understand what a test class covers simply from its name.

ForgotPasswordTests is an example. It’s not specific to any page, and simply by the name of it you immediately have a pretty good understanding what that test class is about. Going further, even this can be broken up or extended upon with additional test classes such as ResetPasswordTests, PasswordLockoutTests, PasswordCreationTests, PasswordStrengthIndicatorTests, and so on.

A good rule to follow would be that there should be at least as many test classes as your team has user stories, per sprint. Each of those classes should have multiple, smaller tests within it.

Code is Guilty Until Proven Innocent

The importance of defensive programming

Defensive programming is a commonly used term. Wikipedia defines it as “a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety or security is needed.” These are important principles to keep in mind when building something that will be released into the wild.

The example that immediately jumps to mind is Jenkins. When Jenkins executes a build process, it defaults to passing. Unless the process encounters an error, or failing tests appear in the JUnit XML, the build will always be marked as passed. This is not defensive programming.

This has previously been the cause of a number of problems for me. Builds that should have been marked as failed were actually marked as passed. Builds that did nothing were marked as passed, and so I was able to continue thinking that everything was OK. Everything was not OK.

The way this currently works can look something like this:

Start job
Pull down some code
Unsuccessfully execute some script
Jenkins build result = passed

The correct way to handle this is to have the system fail everything by default, and don’t let anything pass until a condition is met. Jenkins should go through a code flow, and the flow should explicitly set the build to passed at the end.

Start job
Pull down code
Execute some script
If a specific condition == true
    Jenkins build result = passed

If the boolean condition doesn’t match what is expected, then the build won’t be set to passed. This would indicate that something went wrong during the process. Again, this would be the ideal way to do it, not what actually happens.
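
The fail-by-default flow above can be sketched in plain Java (rather than Jenkinsfile Groovy; the method and parameter names here are hypothetical):

```java
// Sketch: the build result defaults to failed and is only promoted to
// passed when the success condition is explicitly met.
public class DefensiveBuild {

    public static String runBuild(boolean scriptSucceeded, boolean conditionMet) {
        String result = "FAILED"; // guilty until proven innocent
        if (scriptSucceeded && conditionMet) {
            result = "PASSED";    // explicitly set at the end of the flow
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(runBuild(true, true));   // PASSED
        System.out.println(runBuild(true, false));  // FAILED
        System.out.println(runBuild(false, true));  // FAILED
    }
}
```

The key design choice is the initial value: nothing the job forgets to do can ever produce a false pass, because passing requires an explicit assignment.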

The way that Jenkins currently works can be problematic for anyone, but it is especially problematic for anything mission critical. Example: a healthcare company performs batch processing for blood results every 15 minutes. The processing job opens a file containing information about bloodwork. In the file is the reason for the blood draw, the expected range of values for the measured information, and the actual value. Each person’s test result will be Positive, Negative, or Other (indicating a problem with the procedure). Now, someone performs a bad update on this script so that it no longer writes out each individual’s test result. When Jenkins runs this batch processing job, it will pass. It will fail only if the file being written out is used later in the job and isn’t there. If that’s not the case, everything will continue on, and (hopefully) this failure will be caught somewhere down the line.

In a Jenkinsfile (a DSL on top of Groovy that lets one put each Jenkins job into code), the parameter is currentBuild.result, and it defaults to passed. Once it is set to anything other than passed, it cannot be reset to passed. CloudBees has made it more difficult to do defensive programming. The workaround is to create a new variable, actualResult, and default it to false. Code the job in the Jenkinsfile as it normally would be. At the point where it is known whether the build should pass or fail, set actualResult to true or false. At the end, set currentBuild.result based on actualResult.

Outside of the awfulness that is Jenkins, there are more common, simple, real-world examples of this concept. When a form has a user input field that accepts a phone number, one would assume the developers put validation on the field. The field should, by default, say that the input (or lack thereof) is invalid; it should go through validation to prove that it is good. Without any validation, “abcd” would be a valid phone number. That’s not what the database field wants to accept, and it should error out. Instead, the field should determine that there is the appropriate number of digits, that an extension is or isn’t included, that a country code is or isn’t present, or whatever else may be desired. It shouldn’t accept undesirable characters.
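
A sketch of that fail-by-default validation (the format rules here are hypothetical and deliberately simple: ten digits, an optional leading +1 country code, common separators tolerated):

```java
// Sketch: input is invalid by default and must prove itself valid.
public class PhoneValidator {

    public static boolean isValid(String input) {
        if (input == null) {
            return false; // absent input is invalid by default
        }
        // Strip common separators before checking the digits.
        String digits = input.replaceAll("[\\s().-]", "");
        // Optional leading +1 country code, then exactly ten digits.
        return digits.matches("(\\+1)?\\d{10}");
    }

    public static void main(String[] args) {
        System.out.println(isValid("(555) 123-4567")); // true
        System.out.println(isValid("abcd"));           // false
    }
}
```

Nothing passes unless it matches the stated rules, so “abcd” (and an empty or missing value) is rejected without any special-case code.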

By assuming that code is guilty and must prove its innocence, by validating input, and by checking expected conditions before blindly stamping a seal of approval (a passing build), a greater degree of safety and security is baked into applications. This is important for mission-critical systems, and it’s no more difficult to put into smaller applications. Follow these practices and save yourself the headache later on. Always prove innocence in code.

Making Every Line Count

Keep It Simple, Stupid

Every line of a test should be useful. If you can remove a single (non-assertion) line from a test and have it still pass, then you’ve got a test that should change. When writing software, the mindset to keep in mind is KISS: Keep It Simple, Stupid. The idea is to keep things as simple and straightforward as possible. This concept also applies to application-level code. A test performing unnecessary actions is undesirable. A good test will test only one thing, be explicit about it, and do it well by making every line count.

To demonstrate what a good test looks like, let’s make an example that applies brakes to a car and verifies that the car is slowed down.

public void carSlowsDown_whenBrakesArePressed() {
    Car myCar = new Car();
    myCar.setSpeed(50, MPH);
    myCar.pressBrakes(20, 6); // Press it 20% of the way down for 6s
    assertTrue(myCar.getSpeedInMph() < 50);
}

As you can see in this example, the test title says the car is expected to slow down after the brakes are applied. We start by creating a car object, followed by explicitly setting the speed. I have opted not to check that the speed is 50 mph after setting it, because presumably that is covered by another unit test. We apply the brakes for a period of time, and then we assert that the speed of the vehicle is less than the speed we started with. Every line serves a purpose in this test.

Of course, not every test is this simple. That’s not to say tests can’t be made simpler, but it does take work. If the majority of tests were as simple and straightforward as this one, then working with tests wouldn’t carry the stigma that it often does. This is something to strive for. When writing a test, or doing a code review, ask yourself, “what is the purpose of this line?” Cut away the fat and include only the necessary lines.