Overview
Test automation is one of the three main testing activities in Testmo, besides manual test cases and exploratory testing. We've designed Testmo so it's very easy to submit your test automation results without having to create or manage your automation test suites and cases first.
Ready to send test results to Testmo?
Are you already familiar with Testmo's test automation, or do you want to get started quickly? If so, you can skip directly to:
You can start submitting your test automation results right away, and Testmo will automatically receive, organize and link all your automation results. To get the most out of Testmo's test automation features, let's look at the basic entities first.
Testmo test automation entities

Sources

To track and compare test results over time, it's useful to link similar tests and results. In Testmo, your test runs of the same tests are linked to the same source (you can think of sources as your test suites). You specify the source of your test runs as an argument to the Testmo CLI tool when you submit your test results.
So if you have a test automation suite for your backend API tests, you would submit the relevant runs with a source named backend-api, for example. Testmo would then create a new source in the specified project if it doesn't exist already, and automatically link your subsequent backend API runs to the same source. This way you can compare tests over time and group and filter your runs more easily.
It's important not to reuse the same source name for different tests/suites; specify a different source for each of your automation suites. Typical examples would be the following (an example submission is shown after the list):
  • backend-api
  • mobile-android
  • mobile-ios
  • mobile-performance-ipad
  • frontend-selenium
  • frontend-admin-webdriver
  • frontend-unit
  • etc.
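For example, results for two different suites would be submitted separately, each with its own source (a minimal sketch; the instance URL, project ID, run names and result file paths are placeholders, and the exact flags are covered in the Testmo CLI documentation):
# Backend API suite: all of these runs are linked to the "backend-api" source
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Backend API tests" \
  --source backend-api \
  --results backend/results/*.xml

# Android suite: submitted separately with its own source
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Mobile Android tests" \
  --source mobile-android \
  --results android/results/*.xml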

Test runs

Whenever you run your automated tests as part of your CI/CD pipeline, from your build server or manually from the command line, you can submit the test results to Testmo. Each time you submit an automation run, a new test run is created in Testmo. If you submit results from different sources/automation suites, you would submit these as separate test runs.
Likewise, if you execute the same suite multiple times in your CI/CD pipeline against different configurations, such as different web browsers or different mobile platforms, you would submit these as separate test runs.
Each test run can optionally also be linked to a configuration or milestone so it's easier to track, report and group your runs. Please see the relevant documentation on how to link runs to configurations and milestones.
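For example, running the same suite against two browsers in separate CI/CD jobs would result in two submitted runs (a sketch; it assumes the run-level --config flag described in the configurations documentation, and the other values are placeholders):
# Same suite executed against two configurations; each execution
# is submitted to Testmo as a separate test run
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Frontend tests (Chrome)" \
  --source frontend-selenium \
  --config "Chrome" \
  --results results-chrome/*.xml

testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Frontend tests (Firefox)" \
  --source frontend-selenium \
  --config "Firefox" \
  --results results-firefox/*.xml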

Threads

Within test automation runs, your tests are grouped into threads. A test run always has at least one thread and can have multiple threads if you execute your automation suite with parallel CI/CD test jobs.
Threads can be useful if you have set up your CI/CD workflow to run your tests in parallel to speed up test execution. In such cases, you would submit the tests for each parallel test job as a separate thread. This way you can see the test times of the different test jobs, as well as track their success state, console output and additional fields.
Parallel test execution with threads
If you just have a regular, non-parallel test automation run, you do not need to create threads yourself. Our automation:run:submit command handles this automatically, creating a single thread and adding all results to it.
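For parallel jobs, the workflow looks roughly like this (a sketch assuming the automation:run:create, automation:run:submit-thread and automation:run:complete commands from the CLI documentation, where the create command prints the ID of the new run; all other values are placeholders):
# 1. Before the parallel jobs start: create the run and remember its ID
RUN_ID=$(testmo automation:run:create \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Backend API tests" \
  --source backend-api)

# 2. In each parallel CI/CD job: submit that job's results as one thread
testmo automation:run:submit-thread \
  --instance https://example.testmo.net \
  --run-id "$RUN_ID" \
  --results results/*.xml

# 3. After all jobs have finished: mark the run as completed
testmo automation:run:complete \
  --instance https://example.testmo.net \
  --run-id "$RUN_ID"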

Tests & results

All the test results you submit are stored together with the relevant thread/run. You can either see the tests and results filtered for a specific thread, or you can see all results of the run at once.
Your tests are also grouped together into folders, usually based on the classes/sections of your test automation suite. Testmo's CLI tool identifies the classes and test names from your test result files automatically. It also generates a unique key based on the folder (class) and test name so it can link future results of the same test.
You can optionally add fields, artifacts and links to runs, threads and tests. This is useful if you would like to include relevant details such as version numbers, Git hashes or a link to your CI/CD pipeline. These details are then conveniently displayed in Testmo.
For individual tests, Testmo automatically detects and submits any custom fields for failed tests from your result files. You can learn more about submitting additional fields and links to threads and runs.
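As an example, fields and links could be attached to a run like this (a sketch assuming the automation:resources:* helper commands and the --resources flag described in the CLI documentation; the field names, environment variables and values are placeholders):
# Collect additional fields and links in a resources file built by the CLI
testmo automation:resources:add-field \
  --name "Git commit" --value "$GIT_COMMIT" \
  --resources testmo-resources.json
testmo automation:resources:add-link \
  --name "CI pipeline" --url "$PIPELINE_URL" \
  --resources testmo-resources.json

# Pass the resources file along when submitting the run
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Backend API tests" \
  --source backend-api \
  --results results/*.xml \
  --resources testmo-resources.json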

Sending automation results to Testmo

We've designed Testmo's test automation feature to make it as easy as possible to submit your test automation results. We provide a couple of options and additional documentation on how to do this.

Testmo CLI tool

In almost all cases, you would use Testmo's CLI tool to submit your test results. You can either start the tool directly from the command line (from any OS and environment) or call the tool from your CI/CD pipeline (see below).
The Testmo CLI tool can automatically parse and submit your test results from standard JUnit XML report files. Practically any test automation tool or framework can generate such report files (or its output is easy to convert to this format), as JUnit XML is the common format used by many CI/CD and automation reporting tools.
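A basic end-to-end example could look like this (a minimal sketch assuming the CLI is installed from npm as @testmo/testmo-cli and reads the API token from the TESTMO_TOKEN environment variable, as described in the CLI documentation; the test command and all other values are placeholders):
# Install the Testmo CLI (requires Node.js/npm)
npm install -g @testmo/testmo-cli

# Run your tests so they produce a JUnit XML report, e.g. with pytest
mkdir -p results
pytest --junitxml=results/junit.xml || true   # continue even if tests fail

# Submit the generated report; the API token is read from TESTMO_TOKEN
export TESTMO_TOKEN="<your API token>"
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Backend API tests" \
  --source backend-api \
  --results results/*.xml
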
Learn more about Testmo's CLI tool.

CI/CD and build integration

Many teams automatically run their automated tests as part of their CI/CD pipeline when new code is pushed to the project's repository, or as part of their nightly builds. Submitting your test results from your CI/CD pipeline or build server is very easy: you would simply use the CLI tool mentioned above.
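A typical CI/CD job script might look like this (a sketch; replace the test command and placeholder values with your own, and provide the API token as a secret environment variable in your pipeline):
# Run the tests, but don't abort the job yet if they fail
set +e
npm test   # replace with your project's test command (writing JUnit XML)
TEST_EXIT_CODE=$?
set -e

# Submit the results to Testmo regardless of the test outcome
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Frontend unit tests (build $BUILD_NUMBER)" \
  --source frontend-unit \
  --results results/*.xml

# Fail the pipeline step if the tests themselves failed
exit $TEST_EXIT_CODE
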
We have additional guides on the CI/CD integration for common tools.

Custom API integration

Last but not least, you could also use Testmo's REST API directly to implement a custom integration. This is usually neither needed nor recommended though, as the CLI tool mentioned above is an easier-to-use wrapper around the API with a lot of helpful logic built in.
Even if you would like to build a custom integration, it would usually be easier to just generate your result files and then call the CLI tool to submit your results.
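For instance, a custom tool could simply write a standard JUnit XML file and hand it to the CLI instead of talking to the REST API directly (a minimal sketch with placeholder names and values):
# Write a minimal JUnit XML report from your custom tooling
mkdir -p results
cat > results/custom.xml <<'XML'
<testsuites>
  <testsuite name="custom-checks" tests="2" failures="1">
    <testcase classname="custom-checks" name="config_is_valid" time="0.12"/>
    <testcase classname="custom-checks" name="endpoints_reachable" time="1.04">
      <failure message="Endpoint /health returned 500"/>
    </testcase>
  </testsuite>
</testsuites>
XML

# Submit the file like any other automation result
testmo automation:run:submit \
  --instance https://example.testmo.net \
  --project-id 1 \
  --name "Custom checks" \
  --source custom-checks \
  --results results/custom.xml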