While performing test sessions, testers collect and organize information so they can share it with stakeholders or with the internal team.
In this article we'll cover the essentials of collecting data that can be turned into actionable information, so that we can improve the product or system we're building.
In order to share findings & evidence properly, we need to think about our goal: why are we doing it?
Maybe we're collecting information for ourselves. Or maybe we're thinking about reporting defects so the developers can have a look at them. Maybe we have an idea to share with the PO or with the team for further discussion. Maybe something works surprisingly well and the team needs to know that. Knowing for whom we're collecting information is relevant, as we may need to tailor that information for them.
Let's see a brief summary of information that we can collect during exploratory testing and how it can be organized.
Notes in the form of text are the most common way of storing our findings. These notes can differ in nature.
Testers use labels, or categories, to distinguish text notes and make them easier to find afterwards.
Below are some examples of labels/categories that are frequently used, along with their meanings:
Evidence consists mainly of files that we either extract from the application (e.g., logs or other artifacts) or that are strongly connected to the system (e.g., screenshots, videos).
Audio recordings, usually taken as verbalized text notes, can also be included in this section, as they are stored as files that we can upload as evidence.
Evidence is a record of facts that is of great help later on, for example during bug analysis and fixing.
In sum, we can have:
Start with a few categories (e.g., Problem, Question, Idea, Praise) and adapt to your needs.
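As a rough sketch of how such labeled notes could be organized programmatically, the snippet below groups session notes under the categories mentioned above (the category names come from this article; the function and variable names, and the sample note texts, are purely illustrative):

```python
from collections import defaultdict

# Starting categories, following the examples above; adapt to your needs.
CATEGORIES = {"Problem", "Question", "Idea", "Praise"}

def add_note(notes, category, text):
    """Store a session note under its label, so it is easy to find later."""
    if category not in CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    notes[category].append(text)

session_notes = defaultdict(list)
add_note(session_notes, "Problem", "Login button unresponsive after timeout")
add_note(session_notes, "Praise", "Search results load surprisingly fast")

# Summarize notes per category, e.g., for a session debrief.
for category in sorted(session_notes):
    print(f"{category}: {len(session_notes[category])} note(s)")
```

Keeping the category set small and explicit, as suggested above, makes the per-category summary immediately useful when sharing findings with the team.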
In the end, add the mindmap or a link to it, as evidence or as a note in your test session.
TBD
Take a screenshot and add some text, marking it as a Problem, Idea, or other. This will provide more context to the screenshot you're taking.
In XEA this is done at the same time you take the screenshot.
TBD
To provide greater focus and avoid noise, record just the browser or the application's UI. Record the whole screen if it makes more sense for your use case (e.g., interaction of a desktop application with other applications).
TBD
Upload some file as evidence and add some text, marking it as a Problem, Idea, or other. This will provide more context to the file you're uploading.
Record the browser network traffic and upload it to XEA. In Chrome, this can be found in "Developer Tools > Network"; make sure to select the "Preserve log" option.
Later on, the .har file can be loaded back in the same panel to be analyzed in further detail.
Add the .har file as evidence to your test session.
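A .har file is plain JSON following the HTTP Archive format, so a quick sanity check can be scripted before attaching it as evidence. A minimal sketch, assuming the standard HAR layout (`log.entries`, each entry with a `request` and a `response`); the inline sample and any file names are illustrative, not taken from a real capture:

```python
import json

def summarize_har(har):
    """Return (url, status) pairs for each captured request in a HAR document."""
    return [(entry["request"]["url"], entry["response"]["status"])
            for entry in har["log"]["entries"]]

# Tiny inline sample standing in for a real capture; with a real export you
# would instead do: har = json.load(open("session.har"))  # illustrative name
sample = {
    "log": {
        "entries": [
            {"request": {"url": "https://example.com/api/login"},
             "response": {"status": 500}},
            {"request": {"url": "https://example.com/home"},
             "response": {"status": 200}},
        ]
    }
}

har = json.loads(json.dumps(sample))  # round-trip to mimic loading from disk
for url, status in summarize_har(har):
    print(status, url)

# Failed requests are often the most interesting evidence for a bug report.
errors = [(u, s) for u, s in summarize_har(har) if s >= 400]
```

Listing the failing requests up front gives developers a concrete starting point when they open the attached .har file in DevTools.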