GraphWalker addresses State Transition Model-Based Testing; in other words, it lets you model a system as states and the transitions between those states, using directed graphs.
With AltWalker, the automation code related to our model can be implemented in Python, C#/.NET, or other languages. In this approach, GraphWalker is only responsible for generating the path through the models.
Let's clarify some key concepts, using the information provided by GraphWalker's documentation that explains them clearly:
- edge: An edge represents an action, a transition. An action could be an API call, a button click, a timeout, etc. Anything that moves your System Under Test into a new state that you want to verify. But remember, there is no verification going on in the edge. That happens only in the vertex.
- vertex: A vertex represents verification, an assertion. A verification is where you would have assertions in your code. It is here that you verify that an API call returns the correct values, that a button click actually did close a dialog, or that when the timeout should have occurred, the System Under Test triggered the expected event.
- model: A model is a graph, which is a set of vertices and edges.
From a model, GraphWalker will generate a path through it. A model has a start element, a generator that rules how the path is generated, and an associated stop condition that tells GraphWalker when to stop generating the path.
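To make this concrete, here are a few generator and stop-condition expressions following GraphWalker's syntax (the vertex name in the last one is illustrative):

```text
random(vertex_coverage(100))          walk randomly until every vertex has been visited
random(edge_coverage(100))            walk randomly until every edge has been visited
quick_random(edge_coverage(100))      try a shorter path that still covers all edges
a_star(reached_vertex(v_NewOwner))    shortest path that reaches a named vertex
```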
Multiple models can interact with one another (i.e. jump from one to the other and vice versa), using shared states (i.e. vertices that have a "shared name").
Each model has an internal state with some variables - its context. In addition, since GraphWalker can traverse multiple models, there is also a global context shared between them.
We can also add actions and guards to the model, which can affect how the model is walked and how it behaves.
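For example, in GraphWalker's JSON model format, a guard and actions can be attached to an edge; the guard is evaluated to decide whether the edge can be taken, and the actions (JavaScript statements) update the model's context when the edge is walked. The ids, names, and variable below are illustrative:

```json
{
  "id": "e0",
  "name": "e_CorrectData",
  "sourceVertexId": "v0",
  "targetVertexId": "v1",
  "guard": "ownersCreated < 3",
  "actions": ["ownersCreated++;"]
}
```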
Mapping concepts to Xray
Besides other entities, in Xray we have Test issues and "requirements" (i.e. issues that can be covered with Tests).
In GraphWalker, testing is performed continuously by walking a path (produced by its generator) until the stop condition(s) are met.
This is a bit different from traditional, sequential test scripts where each one has a set of well-defined actions and expected results.
We can say that GraphWalker produces dynamic test cases, where each one corresponds to the full path that was generated. Since the number of possible paths can be quite high, we can follow a more straightforward approach: consider each model a Test, regardless of the exact path that is executed. Remember that a model is in itself a high-level test idea, something that you want to validate; therefore, this seems like a good fit as long as we have the means to debug it later on.
What about "requirements"?
Well, even though GraphWalker allows you to assign one or more requirement identifiers to each vertex, it may not be the most suitable approach for linking our model (or parts of it) to requirements. Therefore, and since we consider the model as a Test, we can eventually link each model to a "requirement" later on in Jira.
In sequential scripted automated tests/checks, we look at the expectation(s) using assert(s) statement(s), after we perform a set of well-known and predefined actions. Therefore, we can clearly say that the test scenario exercised by that test either passed or failed.
In MBT, especially in the case of State Transition Model-Based Testing, we start from a given vertex, but the path, which describes the sequence of edges and vertices visited, can be quite different each time the tool generates it. The stop condition is not composed of one or more well-known, fixed expectations; it's based on graph/model-related criteria instead.
When we "execute the model," it will walk the path (i.e. move from vertex to vertex through a given edge) and perform checks in the vertices. If those checks are successful until the stop condition(s) are achieved, we can say the run was successful; otherwise, the model is not a good representation of the system as it is, and we can say that it "failed."
This example has been ported from GraphWalker+Java to AltWalker+Python and the full source-code is available here.
- Target SUT (PetClinic sample application):
- Java 8
$ git clone https://github.com/SpringSource/spring-petclinic.git
$ cd spring-petclinic
$ git reset --hard 482eeb1c217789b5d772f5c15c3ab7aa89caf279
$ mvn tomcat7:run
- Test code (source-code and additional details here)
- GraphWalker 4.2.0
- AltWalker 0.2.7
- Altom's Model-Editor or GraphWalker Studio
How can we test the PetClinic using an MBT technique?
Well, one approach could be to model the interactions between different pages. Ultimately they represent certain features that the site provides and that are connected with one another.
In this example, we'll be using these:
- PetClinic: main model of the PetClinic store, which ties together the models for the different sections of the site
- FindOwners: model around the feature of finding owners
- Veterinarians: model around the feature of listing veterinarians
- OwnerInformation: model around the ability of showing information/details of an owner
- NewOwner: model around the feature of creating a new owner
As mentioned earlier, models can be built using AltWalker's Model-Editor (or GraphWalker Studio) or directly in the IDE (for VSCode there's a useful extension to preview them). The visual editors, namely AltWalker's Model-Editor, can also load previously saved model(s) such as the ones in petclinic_full.json. In this case, the JSON file contains several models; we could also have one JSON file per model.
The following picture shows the overall PetClinic model, that interacts with other models, and also the NewOwner model.
If we use the visual editors to build the model, then we need to export it to one (or more) JSON file(s).
Note: if you use GraphWalker Studio instead, it allows you to run the model offline, i.e. without executing the underlying test automation code, so we can validate it.
Let's pick the NewOwner model as an example, which is quite simple.
"v_NewOwner" represents, accordingly to what we've defined for our model, being on the "New Owner" page.
If we fill correct data (i.e. using the edge "e_CorrectData"), we'll be redirected to a page showing the owner information.
Otherwise, if we fill incorrect data (i.e. using the edge "e_IncorrectData"), an error will be shown and the user remains on the "New Owner" page.
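Under these assumptions, the NewOwner model could look roughly like this in GraphWalker's JSON model format (the element ids, the generator, and the target vertex for the success case are illustrative, not copied from the tutorial's actual files):

```json
{
  "name": "NewOwner",
  "generator": "random(vertex_coverage(100))",
  "startElementId": "v_NewOwner",
  "vertices": [
    { "id": "v_NewOwner", "name": "v_NewOwner" },
    { "id": "v_OwnerInformation", "name": "v_OwnerInformation" }
  ],
  "edges": [
    { "id": "e_CorrectData", "name": "e_CorrectData",
      "sourceVertexId": "v_NewOwner", "targetVertexId": "v_OwnerInformation" },
    { "id": "e_IncorrectData", "name": "e_IncorrectData",
      "sourceVertexId": "v_NewOwner", "targetVertexId": "v_NewOwner" }
  ]
}
```

Note how the invalid-data edge loops back to the same vertex, matching the behavior described above.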
As detailed in AltWalker's documentation, if we start from scratch (i.e. without a model), we can initialize a project for our automation code using something like:
$ altwalker init -l python test-project
When we have the model, we can generate the test package containing a skeleton for the underlying test code.
$ altwalker generate -l python path/for/test-project/ -m path/to/models.json
Alternatively, if we already have a model, we can pass it directly to the initialization command:
$ altwalker init -l python test-project -m path/to/model-name.json
During implementation, we can check our model for issues/inconsistencies, just from a modeling perspective:
$ altwalker check -m path/to/model-name.json "random(vertex_coverage(100))"
We can also verify that the test package contains the implementation of the code related to the vertices and edges:
$ altwalker verify -m path/to/model-name.json tests
Check the full syntax of AltWalker's CLI (i.e. "altwalker") for additional details.
Additionally, faker is used to generate test data for the model (e.g. whenever filling data on the edges).
Actions performed in the edges are quite simple. Assertions are also simple as they're only focused on the state/vertex they are at.
In the previous code, we can see that each model is a class. Each one of those classes must contain methods corresponding to the related edges and vertices; methods should be named in the same way as the names assigned for the edges and for the vertices in the model.
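As a minimal sketch (not the tutorial's actual code), the test class generated by AltWalker for the NewOwner model would have roughly this shape: one class per model, one method per vertex/edge, named exactly as in the model. The page interactions are stubbed out here with comments:

```python
# Hypothetical skeleton for the NewOwner model's test class.
# Class and method names must match the model name and its
# vertex/edge names so AltWalker can dispatch to them.

class NewOwner:
    def v_NewOwner(self):
        # Vertex = assertion: verify we are on the "New Owner" page,
        # e.g. assert "New Owner" in driver.title (browser code omitted)
        print("asserting we are on the New Owner page")

    def e_CorrectData(self):
        # Edge = action: fill the form with valid owner data and submit
        print("filling the form with valid owner data")

    def e_IncorrectData(self):
        # Edge = action: submit invalid data; the model then expects
        # an error and a loop back to v_NewOwner
        print("filling the form with invalid owner data")
```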
To run the tests using a random path generator, stopping at 100% vertex coverage, we can use AltWalker's "online" command:
$ altwalker online tests -m path/to/models.json "random(vertex_coverage(100))"
However, that only produces some debug output in the console.
If we aim to integrate this in CI/CD, or even have visibility of it in a test management tool such as Xray, we need to generate a JUnit XML report.
However, AltWalker (as of v0.2.7) does not yet provide a built-in JUnit reporter.
Luckily, we can implement our own code to run AltWalker, as it provides an open API. This code is available in the script run_with_custom_junit_report.py, which can be found in the repository containing this tutorial's sample code.
This code makes use of a custom reporter that can generate JUnit XML reports in two different ways:
- mapping each model to a JUnit <testcase> element, which ultimately will be translated to one Test issue in Xray per model
- mapping the whole run to a single JUnit <testcase> element, considering the whole run as successful or not; in this case, it will lead to a single Test issue in Xray
The runner's code produces both reports, so we can evaluate them.
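The first mapping (one <testcase> per model) can be sketched with the standard library alone. This is a hedged illustration, not the tutorial's actual reporter: the hookup into AltWalker's reporter API is omitted, and the `results` structure is a stand-in for the per-model pass/fail data that hookup would collect:

```python
# Sketch: build a JUnit XML report with one <testcase> per model,
# using only the standard library. Wiring this into AltWalker's
# reporter API (which would supply the real results) is not shown.
import xml.etree.ElementTree as ET

def build_junit_report(results, suite_name="AltWalker"):
    """results: dict mapping model name -> (passed: bool, failure message or None)."""
    suite = ET.Element("testsuite", name=suite_name, tests=str(len(results)))
    for model, (passed, message) in results.items():
        # Name each testcase "model.<name>" so Xray derives a stable
        # unique identifier (Generic Test Definition) per model.
        case = ET.SubElement(suite, "testcase",
                             classname=suite_name, name=f"model.{model}")
        if not passed:
            ET.SubElement(case, "failure", message=message or "model failed")
    return ET.tostring(suite, encoding="unicode")

print(build_junit_report({"NewOwner": (True, None),
                          "FindOwners": (False, "assertion failed in v_FindOwners")}))
```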
After successfully running the tests and generating the JUnit XML report, it can be imported to Xray (either by the REST API or through the Import Execution Results action within the Test Execution, or even by using a CI tool of your choice).
Each model is mapped to JUnit's <testcase> element which in turn is mapped to a Generic Test in Jira, and the Generic Test Definition field contains the unique identifier of our test; in this case it's "model.<name_of_model>". The summary of each Test issue has the name of the model.
The Execution Details page also shows information about the Test Suite, which will be just "AltWalker".
Alternate JUnit XML generation (all-in-one/single testcase)
If we generate the JUnit XML report with a single <testcase> element for the whole run of our model, we would have just one Test created in Xray. It would be globally passed/failed.
Our complete model is abstracted to a Test issue whose Generic Test Definition (i.e. its unique identifier) is something like "models.<customizable_in_the_reporter>".
- Use MBT not to replace existing test scripts, but where you need to provide broader coverage
- Discuss the model(s) with the team and identify which ones can be most valuable for your use case
- Multiple runs of your tests can be grouped and consolidated in a Test Plan, so you can have an updated overview of their current state
- After importing the results, you can link the corresponding Test issues with an existing requirement or user story and thus track coverage directly on the respective issue, or even on an Agile board
- Visual model editor for AltWalker and GraphWalker
- AltWalker Model Visualizer for VSCode
- Actions and Guards (from AltWalker's documentation)
- AltWalker examples (Python and C#/.NET)
- AltWalker CLI
- Port of PetClinic MBT example to AltWalker and Python (code for this tutorial)
- GraphWalker models for testing the PetClinic site (source-code)