Automated Regression Testing: Building a Test Suite with Gauge, Docker and Selenium

Krishna Chaitanya Acondy
Full-stack developer with experience in Agile. Well versed in Scrum.

A short while ago, I was involved in setting up the infrastructure for a test suite for automated regression testing, including integration into our CI/CD pipeline. This was a slow and painstaking task, since there is hardly any documentation or guidance available online which covers the entire process end to end. So I decided to write this article to provide assistance to anyone trying to build automated tests into their development process.

Table of Contents

  • Challenges you face while ensuring quality in building software
  • Automated Regression Testing to the Rescue!
  • Automated Regression Testing with Gauge and WebDriver 🎯
    1. Installation and Scaffolding
    2. Adding WebDriver and Configuration
    3. Setting up Abstractions
  • Dockerize! 🐳
  • CI Integration 🚀
  • Conclusion

Challenges you face while ensuring quality in building software

Over the years, I have worked on a number of web development projects using different technology stacks, in multiple domains. In most of these projects, as the codebase builds up over time, it becomes increasingly difficult to ensure quality working software. Possible reasons are –

  • Poorly defined requirements/acceptance criteria
  • Inability to strongly link requirements to the implementation
  • Inadequate test coverage at all levels
  • An excessive focus on feature development, leaving no time to address upkeep and technical debt

Over time, this results in an application that looks somewhat like this:

A chimera, as described in Homer’s Iliad. Circa 1590-1610

Software projects generally have a regression testing period towards the end of each release, to ensure that all existing functionality is still intact. With every new feature added, the regression testing period gets extended. In some cases, I have seen regression cycles that lasted 3 to 4 weeks!

This is in complete contrast to the CI/CD mindset which emphasises releasable code with each change. To achieve this, we need to test the application continuously during the development cycle. Also, the testing cycle has to ensure that both new and existing features work fine. 

Automated Regression Testing to the Rescue!

In my current project, we realised quite early on that we would need to first set up the tool chain and infrastructure for testing, so that our developers can build quality software with confidence. This initial phase is arguably the most important step for any project. It will result in huge savings in development costs over time, while ensuring quality in a sustainable way. 

Automated functional testing is an important component in this process, which will give us the following benefits:

  • Continuous testing of all requirements
  • Early feedback about bugs and breaking changes
  • Automated execution of repetitive manual test cases
  • Freedom for QA engineers to take up more exploratory testing

In addition, over time, this will build up a full test suite for automated regression testing, which will drastically cut down the amount of time required to perform regression testing on an application.

In this article, we will discover how to set up a framework for automated UI tests using Gauge, WebDriver, Docker and Jenkins.

The application I am currently working on is a web app with an Angular front-end and dotnet core APIs on the back-end. We use Jenkins for our CI/CD pipelines and build our software using Docker containers to ensure consistency of the built artefacts.

I will be covering the setup for this web app, which is specific to dotnet; however, you can apply these learnings to any other stack.

Automated Regression Testing with Gauge and WebDriver 🎯

Before we started building our test suite, we investigated multiple tools, frameworks and test runners, and eventually zeroed in on Gauge. We chose Gauge because – 

  • It supports Dotnet Core.
  • It has a lightweight core which is extensible via plugins.
  • It lets you define specs either as Gherkin-style Given-When-Then scenarios, or as markdown, which is more free-form.
  • It comes with a fully featured CLI, which works great in our CI environment.
  • Its inbuilt test runner allows for running subsets of tests grouped by different attributes.

Installation and Scaffolding

To get started, install the Gauge CLI on your machine using npm.

npm install -g @getgauge/cli

You need to install the Gauge Dotnet plugin to be able to scaffold Dotnet Core projects.

gauge install dotnet

Then, you can create a Gauge Dotnet Core project using

gauge init dotnet

This gives you a basic dotnet core project with a few gauge-specific additions:


The project includes a manifest.json file which specifies the Gauge plugins required, and a couple of .properties files which set up a host of configurable attributes. You can run the specs using the Gauge CLI, with the command

gauge run specs

So far, so simple.

The project also includes a specs directory, with an example.spec file. This file contains sample scenarios specified in markdown, and the StepImplementation.cs file defines a class with functions that implement the steps in these scenarios.
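To illustrate how a markdown step binds to C#, here is a minimal step implementation. The step text and class name below are hypothetical, not taken from the scaffolded project:

```csharp
using Gauge.CSharp.Lib.Attribute;

namespace AutomatedUiTests
{
    public class SampleSteps
    {
        // Binds the spec line: * Say "hello" to "gauge"
        // Quoted values in the spec arrive as the <greeting> and <name> parameters.
        [Step("Say <greeting> to <name>")]
        public void Say(string greeting, string name)
        {
            System.Console.WriteLine($"{greeting}, {name}!");
        }
    }
}
```

Gauge matches each step in the spec file against the Step attribute text at runtime, so the implementation class never needs to be referenced explicitly.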

Adding WebDriver and Configuration

To be able to test our web application’s functionality, we will need to include Selenium WebDriver and the supporting NuGet packages:

  1. Selenium.WebDriver
  2. Selenium.Support
  3. Selenium.WebDriver.ChromeDriver

In addition, we will also require a configuration file which specifies the application’s URL and any other information that may be needed. We can use the APIs from the Microsoft.Extensions.Configuration package to read in the configuration from an appsettings.json file.

Now, having set up all the required dependencies, we need a way to open up the browser, navigate to the application URL and tear down the WebDriver session after the tests have been run. Gauge provides certain lifecycle hooks such as BeforeSuite  and AfterSuite  which we can leverage for this purpose. Because of the way Gauge instantiates classes at runtime, the easiest way to provide reusable functionality is to expose it as static functions in a globally available class. We can call this the ‘Test Context’.

using System.IO;
using Gauge.CSharp.Lib.Attribute;
using Microsoft.Extensions.Configuration;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

namespace AutomatedUiTests
{
    public class TestContext
    {
        public static IConfiguration Configuration { get; private set; }
        public static IWebDriver WebDriver { get; private set; }
        public static string ApplicationUrl => Configuration["ApplicationUrl"];

        [BeforeSuite]
        public void Setup()
        {
            Configuration = BuildConfiguration();
            WebDriver = CreateWebDriver();
        }

        [AfterSuite]
        public void TearDown()
        {
            WebDriver.Quit();
        }

        public static void GoToLandingPage()
        {
            WebDriver.Navigate().GoToUrl(ApplicationUrl);
        }

        public static IWebElement CurrentPage => WebDriver.FindElement(By.TagName("body"));

        // Environment variables are added last, so they override appsettings.json.
        private static IConfigurationRoot BuildConfiguration() =>
            new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json")
                .AddEnvironmentVariables()
                .Build();

        private static IWebDriver CreateWebDriver()
        {
            var chromeOptions = new ChromeOptions();
            var driverFolder = Path.Combine(Directory.GetCurrentDirectory(), "gauge_bin");
            return new ChromeDriver(driverFolder, chromeOptions);
        }
    }
}

A possible enhancement here would be to add configuration specifying the web driver type, and instantiate the right driver based on the config value.

{
  "WebDriver": {
    "Type": "remote",
    "Chrome": {
      "Headless": "true"
    },
    "Remote": {
      "HubUrl": "http://localhost:4444/wd/hub"
    }
  },
  "ApplicationUrl": "http://localhost:8080"
}
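A minimal sketch of such a driver factory, assuming the configuration shape above. The headless handling and the factory class itself are illustrative, not part of the project as described:

```csharp
using System;
using System.IO;
using Microsoft.Extensions.Configuration;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public static class WebDriverFactory
{
    // Sketch only: chooses a local or remote driver based on "WebDriver:Type".
    public static IWebDriver Create(IConfiguration config)
    {
        var chromeOptions = new ChromeOptions();
        if (config["WebDriver:Chrome:Headless"] == "true")
        {
            chromeOptions.AddArgument("--headless");
        }

        if (config["WebDriver:Type"] == "remote")
        {
            // Point at the Selenium Grid hub defined in configuration.
            return new RemoteWebDriver(
                new Uri(config["WebDriver:Remote:HubUrl"]),
                chromeOptions.ToCapabilities());
        }

        // Local fallback: ChromeDriver is bundled into gauge_bin at build time.
        var driverFolder = Path.Combine(Directory.GetCurrentDirectory(), "gauge_bin");
        return new ChromeDriver(driverFolder, chromeOptions);
    }
}
```

With this in place, the same test suite can run against a local browser during development and against a Selenium Grid in CI, switched purely by configuration.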

Setting up Abstractions

One of the eventual objectives of this test framework is to involve the QA engineers in the automated regression testing. To achieve this, we will need to architect this project to make the actual tests read as much like plain English as possible. We can do this by setting up the right abstractions to test our application. 

We can divide our application into different ‘pages’ (even though it is a single-page application, we still have different routes). Each page will be composed of one or more components. This composition-based approach is highly scalable, and provides a flexible and powerful way of modelling our application.

Anyone who has worked with Selenium WebDriver will know that it is notoriously flaky. It is very important to build in the right waiting mechanisms at the correct levels of abstraction so as to make the tests reasonably robust against timing issues. To this end, it is vital to set up base classes for our components and pages. These classes will provide shared functionality and abstract away implementation details, while providing a clean API for more specific components and pages.

using System;
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Interactions;
using OpenQA.Selenium.Internal;
using OpenQA.Selenium.Support.UI;

namespace AutomatedUiTests.Components
{
    public interface IComponent
    {
        void WaitFor(TimeSpan timeSpan);
        void WaitUntil(Func<bool> condition, int timeoutValue);
        void Hover();
    }

    public abstract class BaseComponent : IComponent
    {
        public virtual bool IsDisabled => !WebElement.Enabled;
        public bool IsDisplayed => WebElement.Displayed;

        public virtual string Text => WebElement.Text;
        protected IWebElement WebElement { get; }
        protected IWebDriver WebDriver => ((IWrapsDriver) WebElement).WrappedDriver;

        protected BaseComponent(IWebElement element)
        {
            WebElement = element;
        }

        public string GetAttributeValue(string name)
        {
            return WebElement.GetAttribute(name);
        }

        public virtual void Hover()
        {
            var action = new Actions(WebDriver);
            action.MoveToElement(WebElement).Perform();
        }

        public void WaitUntil(Func<bool> condition, int timeoutValue = 30)
        {
            var wait = new WebDriverWait(WebDriver, TimeSpan.FromSeconds(timeoutValue));
            wait.Until(driver => condition.Invoke());
        }

        // Arbitrary wait; use sparingly, and only as a last resort.
        public void WaitFor(TimeSpan timeSpan)
        {
            Thread.Sleep(timeSpan);
        }
    }
}

This BaseComponent class provides a means to interact with the UI component on screen. In addition, it provides a means to wait for a certain condition to be fulfilled via the WaitUntil function. It also provides an arbitrary wait, although this must only be used very sparingly, and as a last resort: arbitrary waits increase the running time of your tests, which can quickly add up to a significant chunk of time, and an arbitrary wait that makes a test pass on your machine may not necessarily work elsewhere.

Any class that inherits from BaseComponent will expose functionality that will be specified in terms of how a user would interact with that component. For example, we could have a MenuComponent which may expose functions for each option in the menu, such as OpenSettings(), OpenProfile(), and LogOut().
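As a sketch, such a MenuComponent might look like the following; the element IDs are hypothetical and would depend on your application’s markup:

```csharp
using OpenQA.Selenium;

namespace AutomatedUiTests.Components
{
    // Hypothetical menu component; the locators below are illustrative only.
    public class MenuComponent : BaseComponent
    {
        public MenuComponent(IWebElement element) : base(element) { }

        // Each public function reads like a user action on the menu.
        public void OpenSettings() => ClickOption("settings");
        public void OpenProfile() => ClickOption("profile");
        public void LogOut() => ClickOption("log-out");

        private void ClickOption(string id) =>
            WebElement.FindElement(By.Id(id)).Click();
    }
}
```

Notice that the WebDriver mechanics stay private; the public surface speaks only in terms of menu options, which is what keeps the step implementations readable.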

We can also have a BasePage class for pages in our application.

using System;
using System.Collections.Generic;
using System.Threading;
using AutomatedUiTests.Components;
using OpenQA.Selenium;
using OpenQA.Selenium.Internal;
using OpenQA.Selenium.Support.Extensions;
using OpenQA.Selenium.Support.UI;

namespace AutomatedUiTests.Pages
{
    public abstract class BasePage
    {
        public string Url => WebDriver.Url;

        public void Close()
        {
            WebDriver.Close();
        }

        public void Refresh()
        {
            WebDriver.Navigate().Refresh();
        }

        public virtual bool HasLoaded()
        {
            return DocumentReady((IJavaScriptExecutor) WebDriver);
        }

        public virtual void WaitFor(TimeSpan timeSpan)
        {
            Thread.Sleep(timeSpan);
        }

        public virtual void WaitUntil(Func<bool> condition, int timeoutValue = 30)
        {
            var wait = new WebDriverWait(WebDriver, TimeSpan.FromSeconds(timeoutValue));
            wait.Until(driver => condition.Invoke());
        }

        public virtual void WaitUntilLoaded()
        {
            var wait = new WebDriverWait(WebDriver, TimeSpan.FromSeconds(10));
            wait.Until(driver => DocumentReady((IJavaScriptExecutor) driver));
        }

        public string Title => Header.Title;

        protected BasePage(IWebElement element)
        {
            _webElement = element;
        }

        protected IWebElement FindElement(By locator)
        {
            // Wait for the element, then scroll it into view before returning it.
            var wait = new WebDriverWait(WebDriver, TimeSpan.FromSeconds(30));
            var element = wait.Until(driver => _webElement.FindElement(locator));
            WebDriver.ExecuteJavaScript("arguments[0].scrollIntoView();", element);
            return element;
        }

        protected IEnumerable<IWebElement> FindElements(By locator)
        {
            return _webElement.FindElements(locator);
        }

        protected Header Header =>
            new Header(_webElement.FindElement(By.TagName("xpl-header")));

        private readonly IWebElement _webElement;

        private IWebDriver WebDriver => ((IWrapsDriver) _webElement).WrappedDriver;

        private static bool DocumentReady(IJavaScriptExecutor executor) =>
            executor.ExecuteScript("return document.readyState").Equals("complete");
    }
}

This provides similar functionality at the page level, and also includes ways to find elements on a page. Any class that inherits from BasePage  may contain multiple user-defined components as instance members, and can provide means to interact in meaningful ways with multiple components. For example, if you have a page that displays a list of items, you could have functionality such as SelectAll(), OpenItem() and so on.
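A hypothetical page for such an item list might read as follows; the class name and selectors are illustrative, not from the actual application:

```csharp
using System.Linq;
using OpenQA.Selenium;

namespace AutomatedUiTests.Pages
{
    // Hypothetical page object for a list of items.
    public class ItemListPage : BasePage
    {
        public ItemListPage(IWebElement element) : base(element) { }

        public int ItemCount => FindElements(By.CssSelector(".item")).Count();

        // Select every item in the list via a single "select all" control.
        public void SelectAll() => FindElement(By.Id("select-all")).Click();

        // Open the item at the given (zero-based) position.
        public void OpenItem(int index) =>
            FindElements(By.CssSelector(".item")).ElementAt(index).Click();
    }
}
```

Because FindElement on BasePage already waits and scrolls, these one-liners inherit that robustness for free.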

We can use this setup in combination with a library such as FluentAssertions. This will enable the tests to be written in language as close to plain English as possible, making them easy to reason about!
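A step implementation combining the pieces might then read like this. The step text and the ItemListPage page object are hypothetical; only TestContext.CurrentPage comes from the code shown earlier:

```csharp
using FluentAssertions;
using Gauge.CSharp.Lib.Attribute;
using AutomatedUiTests.Pages;

namespace AutomatedUiTests
{
    public class ItemListSteps
    {
        // Hypothetical step: ItemListPage would be a BasePage subclass
        // exposing an ItemCount property.
        [Step("The item list should contain <count> items")]
        public void ItemListShouldContain(string count)
        {
            var page = new ItemListPage(TestContext.CurrentPage);
            page.WaitUntilLoaded();
            page.ItemCount.Should().Be(int.Parse(count));
        }
    }
}
```

Between the markdown spec, the page abstraction and the fluent assertion, the whole chain from requirement to verification stays readable to a non-developer.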

Dockerize! 🐳

Now that we have a framework in place for our automated UI tests, let us take things one step further and make the tests run in a Docker container.

This great little Docker image by my colleague Mike McFarland provides a linux installation with Dotnet Core and Gauge preinstalled. That takes care of most of the set up, and leaves only the task of setting up our own application. The application and any of its dependencies can be started using a docker-compose.yml file. Here is an example file that also creates a Selenium Grid hub and a node that runs the Chrome browser.

version: '3.7'

services:
  web-app:
    container_name: web-app-${DOCKER_SUFFIX}
    image: <Path to your app's docker image>
    networks:
      - automated-ui-tests
  # Include any other components of your application here

  selenium-hub:
    container_name: selenium-hub-${DOCKER_SUFFIX}
    image: selenium/hub:3.141.59-neon
    expose:
      - 4444
    environment:
      - GRID_TIMEOUT=120000
    networks:
      - automated-ui-tests

  chrome-node:
    container_name: chrome-node-${DOCKER_SUFFIX}
    image: selenium/node-chrome:3.141.59-neon
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
    networks:
      - automated-ui-tests

networks:
  automated-ui-tests:
    name: automated-ui-tests-${DOCKER_SUFFIX}

The ${DOCKER_SUFFIX} can be something such as a timestamp that guarantees a unique name for the network.

This network of containers can be brought up using docker-compose up. We can run the specs against the Gauge Dotnet docker image using the following command:

docker run --name automated-ui-tests-${DOCKER_SUFFIX} --rm \
--network automated-ui-tests-${DOCKER_SUFFIX} \
-e ApplicationUrl=http://web-app-${DOCKER_SUFFIX} \
-v $(pwd):/workspace \
-w /workspace mikemcfarland/gauge-dotnet \
/bin/bash -c 'gauge run Specs/; rtn=$?; chown -R 1002:1002 .; exit $rtn'

This runs the command gauge run Specs/ inside the Gauge Dotnet docker container. The current folder (outside the container) is made available as the workspace. It also sets the application URL as an environment variable, which overrides the value present in appsettings.json. So, our tests now run against the spun-up instance of our app within the same Docker network.

This is great, because it:

  1. Neutralises any differences there may be between different dev machines.
  2. Always runs the tests against the exact same configuration.
  3. Removes the need for a deployed instance of our application running on a physical server.

This set up can bring up the entire environment required, on any machine that has Docker installed, run the tests, and tear it all down after!

CI Integration 🚀

CI integration is a very important piece of the puzzle. It will make or break our ability to run a full test suite for automated regression testing. Dockerizing our test runs makes it super simple to integrate into the CI pipeline. It’s just a matter of bringing up the docker containers, and executing the command from the previous section. The only additional step is to publish the HTML report generated by Gauge.

def dockerArtifactoryRegistry = "your-docker-registry-here"
def timestamp = new Date().format("MMddHHmmss", TimeZone.getTimeZone('UTC'))

pipeline {
  agent {
    node {
      label 'docker'
      customWorkspace "automated-ui-tests"
    }
  }

  environment {
    DOCKER_SUFFIX = "${timestamp}"
  }

  stages {
    stage('Run tests') {
      steps {
        sh "docker-compose pull"
        sh "docker-compose up -d"

        dir("src/AutomatedUiTests") {
          sh """
            docker run --name automated-ui-tests-${DOCKER_SUFFIX} --rm \
              --network automated-ui-tests-${DOCKER_SUFFIX} \
              -e WebDriver__Type=remote \
              -e WebDriver__Remote__HubUrl=http://selenium-hub-${DOCKER_SUFFIX}:4444/wd/hub \
              -e ApplicationUrl=http://web-app-${DOCKER_SUFFIX} \
              -v \$(pwd):/workspace \
              -w /workspace mikemcfarland/gauge-dotnet \
              /bin/bash -c 'gauge run Specs/; rtn=\$?; chown -R 1002:1002 .; exit \$rtn'
          """
        }
      }
    }
  }

  post {
    always {
      sh "docker-compose down --remove-orphans"
      sh "docker image prune -f"

      publishHTML([
        allowMissing: false,
        alwaysLinkToLastBuild: true,
        keepAll: true,
        reportDir: 'src/AutomatedUiTests/Reports/html-report/',
        reportFiles: 'index.html',
        reportName: 'Gauge Report',
        reportTitles: ''])
    }
  }
}


This Jenkins pipeline can then be triggered from your code repository via a webhook, or scheduled to run at specific intervals.

There we have it! We have now set up a framework for automated UI testing with Gauge and Selenium, dockerized it, and integrated it into the CI/CD pipeline.


This provides a solid platform on which to build quality software. However, it will require slight changes to the development process to be able to provide maximum value:

  • Including automated tests in the definition of done for user stories
  • Adjusting estimates to reflect the added effort of developing tests
  • Acceptance from product management that it takes slightly longer to deliver features at a higher quality

We can now build up a full test suite for automated regression testing in parallel with the actual application. This involves a significant effort upfront to set up the framework, but will pay for itself many times over in the long run.