The Analysis feature helps you understand interaction coverage after every scenario. It verifies Test coverage, identifies less impactful Test cases, and optimizes Test selection for efficiency.
By using Analysis, you can make data-driven decisions to improve quality while minimizing risk and resource use.
Coverage Matrix refers to a method of evaluating the completeness of your Test scenarios by analyzing the combinations of parameter values. It ensures that each pair of parameters is adequately tested according to the specified combination strength (e.g., 2-way).
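As a minimal sketch of how combination strength relates to the number of value combinations that must be covered, the snippet below counts them for any strength. The parameter model and function name are illustrative (this is not Xray Test Case Designer's API); the values match the example used later in this article.

```python
from itertools import combinations
from math import prod

def total_combinations(params: dict, strength: int = 2) -> int:
    """Total number of value combinations to cover at the given strength
    (strength=2 means every pair of values from two different parameters)."""
    return sum(
        prod(len(values) for values in group)
        for group in combinations(params.values(), strength)
    )

# Hypothetical model: four parameters with two values each.
parameters = {
    "Size": ["Large", "Small"],
    "Weight": ["Heavy", "Light"],
    "Color": ["Purple", "Green"],
    "Shape": ["Hexagon", "Circle"],
}
print(total_combinations(parameters, strength=2))  # 24
```

Raising the strength (e.g., to 3-way) grows the count quickly, which is why higher strengths require more Test cases.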
Graph Coverage assesses the flow and transitions between different Test scenarios or states. It ensures that all possible paths or sequences through a process are tested, particularly focusing on state-based or workflow-driven Tests. This type of coverage is useful for identifying gaps in the logic or flow of complex systems and ensuring that all transitions are properly validated.
The Analysis coverage charts can be extremely useful in answering questions like: "How much coverage is each of my tests adding?" and "How much testing is enough?"
It may take a few minutes to understand the valuable information presented in the charts. The number of Parameters and Values entered in the Parameters screen determines how many total possible pairs of values exist in your Test model. See the example with eight values (Figure 7):
Figure 7 - Values
Given these inputs, your model has exactly 24 possible pairs of values, as shown below:
| Total pairs to be tested (24) | | | | | |
|---|---|---|---|---|---|
| Large & Heavy | Small & Heavy | | | | |
| Large & Light | Small & Light | | | | |
| Large & Purple | Small & Purple | Heavy & Purple | Light & Purple | | |
| Large & Green | Small & Green | Heavy & Green | Light & Green | | |
| Large & Hexagon | Small & Hexagon | Heavy & Hexagon | Light & Hexagon | Purple & Hexagon | Green & Hexagon |
| Large & Circle | Small & Circle | Heavy & Circle | Light & Circle | Purple & Circle | Green & Circle |
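The table above can be reproduced programmatically: every pair of values drawn from two different parameters is one cell. This is a sketch with assumed parameter names, not a generated artifact of the tool.

```python
from itertools import combinations, product

# The article's example model: four parameters, two values each.
parameters = {
    "Size": ["Large", "Small"],
    "Weight": ["Heavy", "Light"],
    "Color": ["Purple", "Green"],
    "Shape": ["Hexagon", "Circle"],
}

# Every unordered pair of values taken from two distinct parameters.
all_pairs = {
    frozenset(pair)
    for a, b in combinations(parameters.values(), 2)
    for pair in product(a, b)
}
print(len(all_pairs))  # 24
```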
The first Test case (Large / Heavy / Purple / Hexagon) will test for six of the 24 possible pairs (Figure 8).
Figure 8 - Pairs
| Coverage of pairs after test 1 (6/24 = 25%) | | | | | |
|---|---|---|---|---|---|
| Large & Heavy | Small & Heavy | | | | |
| Large & Light | Small & Light | | | | |
| Large & Purple | Small & Purple | Heavy & Purple | Light & Purple | | |
| Large & Green | Small & Green | Heavy & Green | Light & Green | | |
| Large & Hexagon | Small & Hexagon | Heavy & Hexagon | Light & Hexagon | Purple & Hexagon | Green & Hexagon |
| Large & Circle | Small & Circle | Heavy & Circle | Light & Circle | Purple & Circle | Green & Circle |
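A single Test case of four values covers every pair of those values, i.e., C(4, 2) = 6 pairs. This hedged sketch checks the arithmetic for the first Test case from the article:

```python
from itertools import combinations

# Test case 1 from the article: one value chosen per parameter.
test_1 = ["Large", "Heavy", "Purple", "Hexagon"]

# A test covers every pair of the values it selects: C(4, 2) = 6 pairs.
covered = {frozenset(pair) for pair in combinations(test_1, 2)}
print(len(covered))        # 6
print(len(covered) / 24)   # 0.25
```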
So after the first Test case, the coverage chart shows that 25% of all possible pairs in this simple example have been tested (Figure 9).
Figure 9 - Interactions
The second scenario (Small / Light / Purple / Circle) will test another six pairs. Importantly, none of these six pairs of values have been tested yet. In our first two Tests, we will have tested a total of 12 pairs of values.
Figure 10 - Pairs
So after two Test cases, the chart shows that 50% of the possible pairs (i.e., 12 of the 24 possible) have been tested (Figure 11).
Figure 11 - Interactions
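The running total above can be computed by taking the union of the pairs covered by each Test. A minimal sketch, using the two Test cases the article lists:

```python
from itertools import combinations

TOTAL_PAIRS = 24  # from the article's four-parameter, two-value model

tests = [
    ["Large", "Heavy", "Purple", "Hexagon"],  # Test 1
    ["Small", "Light", "Purple", "Circle"],   # Test 2
]

# Union of the pairs covered by each Test; duplicates would collapse here.
covered = set()
for test in tests:
    covered |= {frozenset(pair) for pair in combinations(test, 2)}

print(len(covered), len(covered) / TOTAL_PAIRS)  # 12 0.5
```

Because none of Test 2's six pairs repeat a pair from Test 1, the union grows by the full six pairs.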
Why do coverage charts start off with a steep trajectory (with lots of added coverage per Test) only to flatten out towards the end (with only a little added coverage per Test)? Analyzing Test number 3 shows us why (Figure 12):
Figure 12 - Pairs
There is no way to select values that test six new pairs, as we did in each of the first two Tests. The best we can do is test five new pairs and one previously tested pair: in this third Test, "Large and Hexagon" was already covered by the first Test.
After Test 3, we have now tested 17 of the 24 total possible pairs. The coverage chart shows 70.8% (versus 75% if we had been able to include six new pairs in Test 3; Figure 13).
Figure 13 - Interactions
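The diminishing-returns effect can be traced by counting only the *new* pairs each Test adds. The article does not list Test 3's exact values, so the third row below is a hypothetical assignment consistent with the description (it repeats "Large and Hexagon" and adds five new pairs):

```python
from itertools import combinations

TOTAL_PAIRS = 24  # from the article's four-parameter, two-value model

tests = [
    ["Large", "Heavy", "Purple", "Hexagon"],  # Test 1 (from the article)
    ["Small", "Light", "Purple", "Circle"],   # Test 2 (from the article)
    ["Large", "Light", "Green", "Hexagon"],   # hypothetical Test 3: repeats Large & Hexagon
]

covered, gains = set(), []
for number, test in enumerate(tests, start=1):
    new = {frozenset(pair) for pair in combinations(test, 2)} - covered
    covered |= new
    gains.append(len(new))
    print(f"Test {number}: +{len(new)} new pairs, "
          f"{len(covered) / TOTAL_PAIRS:.1%} cumulative coverage")
```

Running this prints +6, +6, then +5 new pairs, ending at 17/24 = 70.8%, matching the chart described above.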
What’s going on with the final two Test cases? Test 1 and Test 2 each added 25% coverage of pairs, so why do Test 5 and Test 6 each add only 4.2%?
The final two scenarios add only a tiny amount of coverage because we managed to test all but two pairs of values in the first four Test cases. It will take at least two more Test cases to cover those last two remaining pairs.
The only pair tested for the first time in Scenario 5 is "Small and Hexagon." The only new pair tested in Scenario 6 is "Large and Circle." That is one new pair per Test, one-sixth as many as in each of the first two Tests.
Figure 14 - Pairs
The likelihood of finding a new defect in Test 5 or 6 is therefore much lower than in Test 1 or 2.
Points to Consider When Analyzing Coverage Information

First, when used correctly and thoughtfully, coverage information can be extremely useful. It provides a quick, objective way to answer "How much additional testing coverage am I achieving with each new test?" and "How much testing is enough?" Many Testing teams use a rule of thumb, such as stopping the execution of Xray Test Case Designer-generated Tests after achieving 80% coverage, since they can clearly see diminishing returns to further testing beyond that point.

The second point is cautionary. It would be a mistake to look at the graph, see that 100% coverage has been achieved after the final Xray Test Case Designer-generated Test, and conclude that these Tests cover everything that should be tested. 100% of what coverage? An analysis chart generated by Xray Test Case Designer, like all software testing coverage reports, is an imperfect model of what should be covered (which is itself based on an imperfect model of the system under test). There may be significant aspects of the system that were not included in the Parameters screen. One or more of these excluded aspects (such as hardware configuration, software configuration or plug-ins, the order in which actions are executed, whether a user navigates with a mouse or keyboard, or whether "submit" buttons are clicked multiple times in quick succession) could cause defects that your current Test set might not identify.
If you have questions or technical issues, please contact the Support team via the Customer Portal (Jira Service Management) or send us a message using the in-app chat.