Overview

Despite AI’s rapid rise in popularity, Stack Overflow’s 2023 Developer Survey found that only 55.17% of respondents are interested in using AI for software testing, and just 2.85% say they highly trust the output of AI tools.

We have followed the AI trend closely on our blog, including a high-level trend summary with pros and cons as well as use cases in exploratory and automated testing. If you haven’t read those posts yet, they are worth a look.

If you are curious enough to give AI a try in your QA practice, in this collection of tips we will help you understand the “how to” aspect of the following workflow:

  1. Process a Jira story
  2. Generate tests with AI based on that story
  3. Move those tests to Xray

We assume that security concerns have been addressed in a way that simultaneously protects the IP and allows the use of private data to create more helpful responses.

Process a Jira story

Starting with the end goal in mind, we want a prompt that sufficiently but concisely describes the requirement to the GenAI tool for test case creation. Of course, we can feed stories to the AI as-is (without changing the structure, level of detail, etc.), but small adjustments based on the “stories as prompts” approach can go a long way towards improving the AI output. The reason is that current AI performance is highly dependent on your problem decomposition skills, i.e. your ability to clearly describe the scope and priorities of the task.

There is plenty of prompting advice out there from the likes of Google and OpenAI, and it will likely take you a few iterations to get it right for your use case. Here, we will just highlight a couple of points:

  • Consider using AI itself to reformat, refine, and/or create stories (read more in our latest blog post).
    • One possible use case is described by Zapier.
    • This suggestion includes the possibility of asking AI to help you with prompts for itself. 
  • Evaluate the tradeoffs of integrated solutions vs external tools.
    • Atlassian is working on AI tie-ins for Jira, which could help with convenience and security.
    • External tools like ChatGPT could have an advantage in power and versatility but would require more effort and expertise in moving the story data.
  • Make sure project-specific context is included in the story description with phrases like "as a [role/behavior]" and "for the [company/system type]".
  • Provide explicit definitions of the business rules and constraints.
  • Consider grouping stories with tight relations/dependencies before passing them to the GenAI tool. It can help with both the context and the overall coverage.
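To make this step more concrete, here is a minimal Python sketch of pulling a story via the standard Jira REST API and wrapping it into a prompt. The instance URL, credentials, issue key, and the prompt template itself are illustrative assumptions, not a prescription:

import requests

JIRA_BASE = "https://yourcompany.atlassian.net"  # assumption: your Jira Cloud URL
AUTH = ("you@example.com", "your-api-token")     # assumption: basic auth with an API token

def story_to_prompt(issue_key: str) -> str:
    # Fetch the story via the standard Jira REST API.
    resp = requests.get(f"{JIRA_BASE}/rest/api/2/issue/{issue_key}", auth=AUTH)
    resp.raise_for_status()
    fields = resp.json()["fields"]
    # Illustrative prompt template; adapt the bracketed context to your project.
    return (
        "You are testing a feature for the [company/system type].\n"
        f"Story summary: {fields['summary']}\n"
        f"Story description: {fields['description']}\n"
        "Business rules and constraints: [list them explicitly here]\n"
        "Generate test cases covering the main flow and the edge cases."
    )

print(story_to_prompt("PROJ-123"))  # hypothetical issue key

From here, the returned string can be pasted into a chat UI or passed to an API call like the ones shown later in this article.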

Generate tests with AI based on that story

With the story in the most effective format, it is time to let the AI do the heavy lifting. Generative models can create test cases that explore a wide range of possible scenarios and input combinations, including edge cases. Overall, the prompting tips for how exactly you ask for test cases are similar to the first step, so we will focus on the aspects that are specific to the Xray flow.

Looking at the end goal again, you need to identify the type of tests you are looking for across two dimensions (based on what is available on your Xray instance):

  • Format - Manual, Cucumber, Generic, or Exploratory.
  • Purpose - a new testing effort or regression.

Depending on the classification, you have to consider the following nuances:

  • For the Manual type, you will need to determine the content breakdown between Action, Data, Expected Result, and Precondition. A sample table from an Xray manual test (or its CSV variant) could be helpful.
    • We recommend starting with content only in Action and Expected Result. See the CSV example in the last section below.


  • For the Cucumber type, you will need to specify Scenario or Scenario Outline version. A sample of the Examples table for the Outline type could be helpful.
    • You may have to use chained prompts to generate the Outline (see the sketch after this list). First, ask for BDD test scenarios with test data/parameter options in-line (within test steps). Then, ask to modify the output using the Gherkin option “Scenario Outline” with an Examples table like the one below.


Examples:
| parameter1 | parameter2 | ... |
| value1     | value1     | ... |
| value2     | value2     | ... |


  • For the Generic type, you will need to specify the language, its version, and any environment peculiarities. Sharing a code snippet as additional evidence could be helpful.

  • For the Exploratory type, the task will be about generating ideas rather than step-by-step instructions, so the prompting will need to be adjusted accordingly. You can see some of the tailored verbiage examples in this guide.

  • Regression tasks could benefit from additional evidence appended to the prompt (or to the source story), such as anonymized behavior patterns from the production app, previous release documentation and test suites, etc.
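As an illustration of the chained prompting mentioned in the Cucumber bullet above, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and the bracketed story placeholder are assumptions to be replaced with your own:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def ask(messages):
    # One round-trip to the chat completions endpoint.
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

# Step 1: ask for BDD test scenarios with test data/parameter options in-line.
history = [{"role": "user", "content":
            "Write Gherkin test scenarios for the story below, keeping test data "
            "in-line within the steps.\n[story text goes here]"}]
scenarios = ask(history)

# Step 2: ask to rework the output into a Scenario Outline with an Examples table.
history += [{"role": "assistant", "content": scenarios},
            {"role": "user", "content":
             "Rewrite these scenarios as a single Scenario Outline, moving the "
             "in-line test data into an Examples table."}]
print(ask(history))

Carrying the first response back into the second request is what lets the model rework its own output instead of starting from scratch.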


Also, we would like to highlight a couple of important settings that are sometimes overlooked:

  • You can ask AI to add a brief description of the reason for generating a particular test (with prompts like “add an explanation of the importance of this test case” or “add a summary of why this test case is needed”). It can help both your analysis and prompt iterations. You can see an example in this article by Jason Arbon.


  • If you are using GPT, you can change the default “helpful assistant” role in the OpenAI Playground, which may affect the “expertise” of the responses.
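For API users, the same role change can be made by swapping out the system message. A minimal sketch, with the model name and role wording as assumptions (the prompt also applies the “why this test case is needed” tip from above):

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Swap the default "You are a helpful assistant" for a domain-specific role.
        {"role": "system", "content":
         "You are a senior QA engineer specializing in test design for web applications."},
        {"role": "user", "content":
         "Generate manual test cases for the story below, and add a summary of why "
         "each test case is needed.\n[story text goes here]"},
    ],
)
print(resp.choices[0].message.content)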


Over time, AI is capable of adapting test generation (and other tasks) to fit your particular business needs, improving the accuracy and relevance of the suggested test cases for you specifically. With that said, fully autonomous test generation is not quite here yet; a collaborative approach with human support is what keeps the results comprehensive and effective.

Move those tests to Xray

We need our tests in a format compatible with the Xray import process; in this article we will discuss two routes - the test case importer with CSV, and the API. For both routes, keep in mind the metadata fields required by your Jira/Xray configuration, e.g. Priority, Reporter, Severity, etc. We recommend handling the format as part of the chained prompting rather than the initial generation request, to make iterating on the previous step faster and easier.

CSV

A phrase like “format the dataset as a csv using an example below” with a limited sample from our import guide (Cloud, DC) helps accomplish this step. You may need to use variables to help carry over the test case content and provide additional instructions about the layout.

Issue Id,Issue key,Test type,Test Summary,Action,Data,Result
1,,Manual,name of your first test,first step,,first expected result
1,,,,second step,,second expected result
2,,Manual,name of your second test,first step,,first expected result
2,,,,second step,,second expected result


If you have access to plugins, there are options like the CSV Export plugin for GPT-4, which makes downloading the file formatted for Xray import a bit easier. If you do not, copy and paste the results into a text editor and save them as a CSV file.
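If you go the copy-and-paste route, a small script can also handle the saving. A minimal sketch with Python’s csv module, using the sample rows from above (in practice the rows would be parsed from the AI output):

import csv

# Rows in the layout shown above; in practice they would come from the AI output.
rows = [
    ["Issue Id", "Issue key", "Test type", "Test Summary", "Action", "Data", "Result"],
    ["1", "", "Manual", "name of your first test", "first step", "", "first expected result"],
    ["1", "", "", "", "second step", "", "second expected result"],
]

with open("tests_for_xray.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)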

API

A phrase like “return the results in a json file using an example below” with a limited sample from our import guide (Cloud, DC, DC - Cucumber) helps accomplish this step. You can then ask the GenAI tool to submit the output to the Xray API based on the instructions from the same guide.
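To illustrate this route, here is a minimal sketch against the Xray Cloud v2 endpoints referenced in the import guide. The client credentials are placeholders, and the hardcoded test definition stands in for the GenAI output:

import requests

XRAY = "https://xray.cloud.getxray.app/api/v2"

# Exchange the Xray Cloud API key pair for a bearer token.
token = requests.post(f"{XRAY}/authenticate", json={
    "client_id": "your-client-id", "client_secret": "your-client-secret",
}).json()

# "tests" stands in for the JSON produced by the GenAI tool in the guide's format.
tests = [{"testtype": "Manual",
          "fields": {"summary": "name of your first test", "project": {"key": "PROJ"}},
          "steps": [{"action": "first step", "data": "", "result": "first expected result"}]}]

resp = requests.post(f"{XRAY}/import/test/bulk", json=tests,
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())

For Xray DC, the endpoints and authentication differ, so check the corresponding guide before adapting the sketch.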

Eventually, depending on your level of technical expertise, you can establish a flow with minimal UI interaction where you call e.g. the ChatGPT API from a scheduled automation script to request stories via the Jira API, generate any type of tests from them, and push the output to the Xray API.
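A rough skeleton of such a flow, reusing the pieces sketched above, could look like this. The JQL filter, model, credentials, and the assumption that the model returns valid import-ready JSON are all placeholders for your own setup:

import requests
from openai import OpenAI

JIRA_BASE = "https://yourcompany.atlassian.net"   # assumption: your Jira Cloud URL
AUTH = ("you@example.com", "your-api-token")
XRAY = "https://xray.cloud.getxray.app/api/v2"
client = OpenAI()

# 1. Request stories via the Jira API (the JQL filter is an assumption).
issues = requests.get(f"{JIRA_BASE}/rest/api/2/search",
                      params={"jql": "type = Story AND labels = needs-tests"},
                      auth=AUTH).json()["issues"]

token = requests.post(f"{XRAY}/authenticate", json={
    "client_id": "your-client-id", "client_secret": "your-client-secret"}).json()

for issue in issues:
    # 2. Generate tests from the story (output validation omitted for brevity).
    answer = client.chat.completions.create(model="gpt-4", messages=[{
        "role": "user",
        "content": "Generate manual test cases as JSON in the Xray bulk import "
                   f"format for this story:\n{issue['fields']['description']}"}])
    tests_json = answer.choices[0].message.content
    # 3. Push the output to the Xray API.
    requests.post(f"{XRAY}/import/test/bulk", data=tests_json,
                  headers={"Authorization": f"Bearer {token}",
                           "Content-Type": "application/json"})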


We believe these tips will help you integrate GenAI tools into your existing Jira/Xray workflows more smoothly. Feel free to share your experiences, challenges, and successes.

