...

One or more tests can be defined at the request level, or even at the whole collection level.
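
For example, a minimal sketch of a test defined in the collection's "Tests" tab, which Postman then runs after every request in the collection; the 1000 ms threshold is an arbitrary value used only for illustration:

// Collection-level test: runs after every request in the collection
pm.test("Response time is acceptable", function () {
    // 1000 ms is an arbitrary threshold chosen for this example
    pm.expect(pm.response.responseTime).to.be.below(1000);
});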


Pre-request scripts are useful for initializing data or implementing setup code before a request and its tests run.
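
As a rough sketch, a pre-request script can store values in variables that the request and its tests later reference via {{...}}; the variable names below are purely illustrative:

// Pre-request script: initialize data before the request is sent
pm.variables.set("timestamp", Date.now());
// illustrative identifier; replace with whatever setup data the tests need
pm.variables.set("randomId", Math.floor(Math.random() * 10000));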

...

The collection contains a request for each endpoint, and each request has one or more tests.


In the previous example, we can see two tests: one that validates a successful HTTP request based on the status code and another that checks the response's JSON content.
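
A minimal sketch of what those two tests could look like (the "name" field and its expected value are illustrative, not taken from the actual collection):

// Test 1: the request succeeded, based on the HTTP status code
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// Test 2: the response body contains the expected JSON content
pm.test("Response JSON has the expected content", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.name).to.eql("expected value");
});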

...

The collection (or a subset of its tests) can be run using the Collection Runner.


The runner shows the overall number of passed and failed tests. We can also see the assertion error for each failed test; in this case, saving the responses (by enabling the corresponding option in the Collection Runner) can help us better understand what is happening.

...

In this case, we'll obtain a public link to it.



Then we need to decide which Newman reporter to use. Newman provides a built-in JUnit reporter; however, better alternatives exist, such as newman-reporter-junitxray or newman-reporter-junitfull.
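
A sketch of a Newman run producing JUnit XML with both reporters, assuming the packages are installed under these names and expose Newman's usual --reporter-<name>-export option (file names and environment are placeholders):

# install Newman and the reporters (assumed package names)
npm install -g newman newman-reporter-junitxray newman-reporter-junitfull

# run the collection (a local file or the public link obtained earlier)
newman run collection.json -e staging_environment.json \
  -r junitxray,junitfull \
  --reporter-junitxray-export xray_junitxray.xml \
  --reporter-junitfull-export xray_junitfull.xml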

...

If it makes sense for us to analyze the results on a per-environment basis, we can fill in the Test Environment to associate with the Test Execution, based on the Postman environment being used.
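
As a hypothetical example, the Test Environment could be passed along when importing the JUnit results through Xray's REST API; the URL, credentials, project key and environment name below are placeholders, and the testEnvironments parameter is assumed to be available on the JUnit import endpoint:

# tag the resulting Test Execution with a Test Environment matching the Postman environment used
curl -u admin:admin \
  -F "file=@xray_junitxray.xml" \
  "https://jira.example.com/rest/raven/2.0/import/execution/junit?projectKey=CALC&testEnvironments=staging"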



A Test Execution will be created containing the results of all executed tests. In our specific case, and only for demonstration purposes, two Test Executions will be created because we're generating two JUnit XML files from the different Newman reporters.

...

  • After importing the results, you can link Test issues to existing requirements or user stories, so that you can track coverage in real time directly on them
  • You can map Postman environments to Xray's Test Environment concept on Test Executions if you want visibility of the results on a per-environment basis
  • Multiple iterations/executions can be linked to an existing Test Plan when importing the results
  • If you run the tests multiple times with the "newman -n <number_of_iterations>" parameter, multiple entries will appear within the Results section of the Test Run execution screen details (see the example after this list)
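
A sketch of such a run, using an arbitrary iteration count:

# run the whole collection 3 times; each iteration produces its own entry in the results
newman run collection.json -n 3 -r junitxray --reporter-junitxray-export xray_junitxray.xml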

References