In today’s software development life cycle, test automation frameworks are an essential part of the testing process. The ability to integrate with the build-test-deploy pipeline is mandatory, and running the tests on a developer’s machine, whether to confirm a failure or to extend the test suite, should require minimal configuration. So, how do you assure the quality of complex distributed systems? What are the key elements Adaptive’s QA experts use to speed up testing?
Most of the systems we develop consist of several subsystems. So when we have a project with a specific workflow and many integration points, it makes sense to automate tests at the user-interface level instead of checking the basics manually. By also doing our test automation at the lowest possible level, we can verify that each function works as expected.
Some of the most popular automation tools include Selenium (for testing user interfaces) and Postman (for testing APIs). However, in this article we’ll focus on UI test automation.
Before we start creating any test automation framework we need to design the system on which the tests will run. Consider the following factors for the system design:
- Separation of business logic, code, test data and configurations
- Ease of maintenance
- Reporting capability and accuracy
- Independence between the operating system, browser and environment
- Continuous Integration (CI) and pipeline integration support
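Of these factors, the separation of configuration from code can be sketched concretely: instead of hard-coding environment details, load them from a properties source. The class, file keys, and defaults below are illustrative, not part of the actual framework:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class TestConfig {
    private final Properties props = new Properties();

    // Load settings (base URL, browser, etc.) from a properties source
    // so the same test code can run against any environment.
    public TestConfig(Reader source) throws IOException {
        props.load(source);
    }

    public String baseUrl() {
        return props.getProperty("baseUrl", "http://localhost:8080");
    }

    public String browser() {
        return props.getProperty("browser", "chrome");
    }

    public static void main(String[] args) throws IOException {
        // In practice this would come from a qa.properties file on disk.
        String qaEnv = "baseUrl=https://qa.example.com\nbrowser=firefox\n";
        TestConfig config = new TestConfig(new StringReader(qaEnv));
        System.out.println(config.baseUrl() + " / " + config.browser());
    }
}
```

Swapping environments then becomes a matter of pointing the suite at a different properties file, with no code changes.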
Link all the requirements in a block diagram and you have something like this:
We now have all the requirements in place. There are several frameworks we can use to create a sample UI test environment; Selenium WebDriver supports several languages (see www.seleniumhq.org/about/platforms.jsp). We have used Java to build our framework, and you can find the sample code in our GitHub repo.
The key parts of our sample framework are as follows:
- Driver setup and helper classes and configuration
- Page object model implementation
- Test case examples
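To illustrate the page object model part of the list above, here is a simplified, framework-agnostic sketch. The `LoginPage` class and the `Browser` interface are illustrative stand-ins; in the actual framework the page object would wrap Selenium’s `WebDriver` and locate elements with `By` selectors:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for WebDriver: the page object only talks to this interface,
// so tests never deal with element lookup directly.
interface Browser {
    void type(String fieldId, String text);
    void click(String buttonId);
}

// Page object: encapsulates one page's fields and actions in one place,
// so a UI change means updating one class, not every test.
class LoginPage {
    private final Browser browser;

    LoginPage(Browser browser) {
        this.browser = browser;
    }

    LoginPage enterCredentials(String user, String password) {
        browser.type("username", user);
        browser.type("password", password);
        return this;
    }

    void submit() {
        browser.click("login-button");
    }
}

public class PageObjectDemo {
    public static void main(String[] args) {
        // Fake browser that records interactions instead of driving a real UI.
        Map<String, String> typed = new HashMap<>();
        StringBuilder clicked = new StringBuilder();
        Browser fake = new Browser() {
            public void type(String id, String text) { typed.put(id, text); }
            public void click(String id) { clicked.append(id); }
        };

        new LoginPage(fake).enterCredentials("alice", "secret").submit();
        System.out.println(typed + " clicked=" + clicked);
    }
}
```

The test cases themselves then read as business steps (`enterCredentials(...).submit()`) rather than raw element interactions.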
We now have a set of automated tests to run. However, a test framework also needs a way to run those tests effectively, and test environment management is a huge part of that.
Test data management approaches
When choosing the best test data management approach, there are two phases to keep in mind, creation and clean-up, although not every approach includes both. To pick the right one, it’s crucial to understand the team’s constraints and align them with the goals of the automated tests. For example, in the case of a shared test environment: how will restoring the initial data source affect the testers on your team? Questions like this often lead to a dedicated QA test environment, which may bring additional cost.
Basic data handling
In the basic approach, the automation code does not create the data that the tests will use, nor does it clean up data after each suite of tests.
While this tactic doesn’t work in most environments, nor with most software applications under test, it does serve as a basis for other patterns. It works when the test environment is isolated and disposable, and the tests do not need initial data to run. It also comes in handy when the application under test is a proof of concept, or when some initial data, for example admin or user accounts, is populated as part of the product’s own code.
Data source re-creation
Another common solution is to reset the data source that the environment uses before each test execution. Restoring a snapshot containing clean initial data ensures that the same data is loaded into the system on every test run.
The downsides, depending on the amount of data and the type of database, are longer execution times and a dependency on the specific database used. Maintaining this process may require specific technical skills from the test creators.
As with the basic approach, recreating the data source works only for some applications and environments under test.
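As a simplified, file-based illustration of the snapshot-restore idea (a real setup would restore a database dump or reset a container volume rather than copy files; the class and paths below are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DataSourceReset {
    // Before a test run, overwrite the working data source with a clean
    // snapshot so every run starts from the same known state.
    public static void restoreSnapshot(Path snapshot, Path workingData) throws IOException {
        Files.copy(snapshot, workingData, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path snapshot = Files.createTempFile("snapshot", ".dat");
        Path working = Files.createTempFile("working", ".dat");
        Files.writeString(snapshot, "clean initial data");
        Files.writeString(working, "dirty data from previous run");

        restoreSnapshot(snapshot, working);
        System.out.println(Files.readString(working)); // clean initial data
    }
}
```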
Test data generation
The third approach is to create a unique data set for each test case execution.
Whereas the data re-creation strategy has a clean-up phase but no creation phase, test data generation has creation but no clean-up. Each test case precondition creates the data it needs to verify a certain functionality. The chance of encountering a race condition on data is far lower, since each test has its own unique data set to work with. The main concern with this approach is that data builds up and might affect the performance of the system. We may need to address this by creating a clean-up postcondition, which costs execution speed and adds maintenance complexity.
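A minimal sketch of per-test data generation: appending a random suffix to every entity name keeps parallel tests from colliding on shared records. The factory class and the prefix below are illustrative:

```java
import java.util.UUID;

public class TestDataFactory {
    // Each call returns a unique username, so two tests (or two parallel
    // runs) never fight over the same account record.
    public static String uniqueUsername(String prefix) {
        return prefix + "-" + UUID.randomUUID().toString().substring(0, 8);
    }

    public static void main(String[] args) {
        String userA = uniqueUsername("qa-user");
        String userB = uniqueUsername("qa-user");
        System.out.println(userA + " vs " + userB); // two distinct names
    }
}
```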
A universal solution is a combination of these three approaches, together with a dedicated, disposable QA environment where only automated tests run as part of continuous integration.
Now that we have an idea of how to build a UI test framework, we need to discuss the environments the tests will run on.
Distributed Parallel Testing
Running the whole test suite several times, once per required environment, would be slow, and critical feedback might arrive late. Setting up the right browser/OS combinations across many virtual machines (or, even worse, physical machines) and verifying that all of them are running correctly is a huge, time-consuming task, not to mention troubleshooting when something goes wrong on an individual node. Running tests in parallel significantly decreases the execution time compared to executing them one by one.
Using Selenium Grid can simplify the testing of multiple OS/browser combinations. It’s also a great way to speed up your tests by running them in parallel on multiple machines. However, we need a way to configure and update our hub / nodes, as well as a way to quickly recover the system if any node crashes or otherwise ends up in a bad state.
To run distributed applications such as Selenium Grid, there are paid tools available; but why pay when you can use Docker instead?
Docker to the rescue
Docker is a free, lightweight containerization platform that delivers a fast and configurable way to run distributed systems. By using containers, instead of running your grid across multiple machines or VMs, you can run it all on a single large machine.
To run the Selenium Grid with Docker, you’ll need to:
- Install Docker and Docker Compose. Selenium Grid itself runs inside Docker, so there’s no need to download it locally. Luckily, both products have great documentation on this topic at https://docs.docker.com/installation/ and https://docs.docker.com/compose/install/.
The content of the ‘docker-compose.yml’ file should be as follows:
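Here is a minimal sketch of such a compose file, using the official `selenium/hub` and browser-node images (this uses the legacy v1 compose format, which the `docker-compose scale` command relies on; exact image tags and port mappings may need adjusting for your setup):

```yaml
# docker-compose.yml — one hub plus scalable Firefox and Chrome nodes.
# Pin image versions in a real grid for reproducible runs.
hub:
  image: selenium/hub
  ports:
    - "4444:4444"

firefox:
  image: selenium/node-firefox
  links:
    - hub

chrome:
  image: selenium/node-chrome
  links:
    - hub
```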
- Bring the grid up in detached mode: docker-compose up -d
- Scale the browser nodes: docker-compose scale firefox=5 chrome=5
By now you should have a Selenium Grid consisting of one hub, five Firefox nodes and five Chrome nodes. You can see them running at http://hostip:4444/grid/console.
Scale up or down
Want to scale your grid up or down? It’s easy, just type docker-compose scale firefox=20 chrome=20.
Too much? Scale it back down with docker-compose scale firefox=10 chrome=10.
Need to nuke everything and restart?
- docker-compose stop
- docker-compose rm
As we mentioned at the start, in today’s software development life cycle test automation frameworks are an essential part of the testing process: integration within the build-test-deploy pipeline is mandatory, and running the tests on a developer’s machine has to require minimal configuration.
To achieve that, you can use Selenium Grid to speed up a Continuous Delivery project by parallelizing the suite of test cases, bringing crucial feedback early in the process. Additionally, Docker is an excellent way to build and destroy scalable, disposable environments, and it’s a breeze to integrate with any CI.
This is just a small example of what we do as QA at Adaptive to create effective automated tests at the UI level, using open-source technologies as a base.
Senior QA Engineer, Adaptive Montreal