
What is a risk?

A risk is something that can impact (negatively or positively) the value of what we're aiming to provide.

Usually we talk about risks as events that can happen with a certain probability and that can negatively impact some stakeholders (e.g., users) to a certain degree.


Example

Using non-encrypted protocols can expose sensitive information, including credentials, to attackers.
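
As a hedged illustration, a lightweight check like the sketch below can surface this risk early. The endpoint list, URLs, and function name are hypothetical stand-ins, not part of any real product configuration.

```python
# Minimal sketch: flag any configured endpoint that does not use an
# encrypted scheme. ENDPOINTS is a hypothetical stand-in for whatever
# configuration your product actually reads.
from urllib.parse import urlparse

ENDPOINTS = [
    "https://api.example.com/login",
    "http://legacy.example.com/export",   # would be flagged
]

def insecure_endpoints(urls):
    """Return the URLs whose scheme is not an encrypted protocol."""
    return [u for u in urls if urlparse(u).scheme not in ("https", "wss")]

if __name__ == "__main__":
    for url in insecure_endpoints(ENDPOINTS):
        print(f"RISK: credentials may travel unencrypted over {url}")
```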


Risks can be handled in different ways: we can prevent/avoid them altogether, mitigate them, or simply accept them.

Testing is often seen as a risk mitigation strategy, but it can also be a risk prevention strategy, depending on who is involved in testing, when it happens, and the decisions taken upon its findings.

Using risks to drive testing

We use testing to obtain quality-related information. Risks and quality are closely related: risks can affect the value as seen by some stakeholder at a given moment. In other words, risks can impact quality.

If we implement a substantial feature without listening to users or doing some preliminary research, there's a risk that the feature won't address our users' needs. If we use external libraries, there's a maintenance and dependency risk that can expose us to more problems ahead.

When we test, whether implicitly or explicitly, we use risks to drive the experiments we perform.


Risk that...

  • the product and the feature don't match user expectations and needs
  • the feature claims are not met
  • feature purpose is not clear and thus may not be used
  • users get frustrated when using the new feature
  • users find it difficult to use the new feature
  • feature is not consistent with other existing features within the product
  • the new feature impacts other existing features somehow
  • adds a considerable performance overhead
  • the brand and UI guidelines are not respected
  • it doesn't meet applicable law in all markets where the product may be subject to it
  • it doesn't comply with regulatory requirements
  • user data or the product itself can be accessed/used by unauthorized users
  • the change is very hard to revert, operate, or monitor/observe
  • the product or the feature will be a major success with a major spike in usage
  • ...


In sum,

  1. Think about who your users are, external and internal. Don't forget the unexpected users;
  2. Think about what matters most to them;
  3. Think about what could go wrong.
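
One lightweight way to capture the outcome of those three steps is a simple note per stakeholder. The sketch below is only an illustration; all the names and risk wording in it are hypothetical.

```python
# Minimal sketch of a risk register entry mirroring the three questions:
# who is affected, what matters to them, and what could go wrong.
from dataclasses import dataclass

@dataclass
class RiskNote:
    stakeholder: str          # who (external, internal, or unexpected user)
    what_matters: str         # what they value most
    what_could_go_wrong: str  # the risk

risk_register = [
    RiskNote("end user", "finishing checkout quickly",
             "new payment step adds friction and abandonment"),
    RiskNote("support team", "being able to diagnose failed payments",
             "the change is hard to monitor or revert"),
]

for note in risk_register:
    print(f"{note.stakeholder}: {note.what_could_go_wrong}")
```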


Contextualizing risks

There is no straightforward, ordered "checklist" of the risks we should tackle. That requires expertise from the tester and... context!

It's impossible to tackle all risks; it's impossible to test "everything". Therefore, whenever testing we have to think about where we will invest our effort, so that we cover aspects that can give us valuable information about quality.

Contextualizing risks with time

Software and the overall development process are not a localized event; they are a journey, with a past, a present, and a future.

This means that software has a history that culminates in the present...

  • features that exist and that are used
  • features that exist and that are not used
  • unintended behaviours (e.g., bugs, feature subtleties) that are used as features
  • areas/features with technical and testing debt


It's important to understand...

  1. How was it used in the past?
    • Why?

      It gives clues about, among other things:

      • features that users were using (and not using) and to what extent
      • flows that users were performing, and not performing
  2. How is it currently used?
    • Why?

      It gives clues about, among other things:

      • features that users are using (and not using) and to what extent
      • flows that users are performing, and not performing
      • if there was a change in user behaviour and underlying needs
      • if there was a change in the software that may have affected the current usage behaviour
  3. How do we foresee it being used and evolving in the future?
    • Why?

      It gives clues about, among other things:

      • concerns about performance and scaling
      • how to monitor/track success
      • how to quickly perform experiments with users
      • existing features that may need to be rethought or that may be affected somehow, and thus may need tailored testing
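
The questions above often boil down to counting what users actually did. The sketch below shows the kind of quick tally that can inform them; the event format and feature names are made up for the example.

```python
# Minimal sketch: tally feature usage from an event log to see which
# features are (and are not) being used. The event format is hypothetical.
from collections import Counter

events = [
    {"user": "u1", "feature": "export_csv"},
    {"user": "u2", "feature": "export_csv"},
    {"user": "u1", "feature": "bulk_edit"},
]

usage = Counter(e["feature"] for e in events)
all_features = {"export_csv", "bulk_edit", "legacy_import"}

print("Usage counts:", dict(usage))
print("Never used:", all_features - set(usage))
```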

Contextualizing risks with needs

Software is made to address needs. Needs can be of different types though; they're not always "functional". What is the purpose we're trying to achieve, and for whom? Are there any existing references we should keep in mind?

In general, we can say that a need is met if a certain goal can be achieved, effectively, efficiently, and with satisfaction.

Sometimes we focus our attention on effectiveness/correctness, which ultimately leads us to look at written specifications, acceptance criteria, or claims. While correctness may be crucial for banking and financial products, it's not as relevant for a social application, where UI and UX matter most.

But needs exist not only for external users but also for internal stakeholders, and even for the team supporting and developing the product. Are we using deprecated dependencies, or dependencies with well-known security issues, for example? Can a component or a service provider easily be replaced by another one? Is our infrastructure properly tracked, is its setup scriptable, and do those scripts handle errors properly?
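
As a hedged illustration of one of those internal checks, the sketch below flags pinned dependencies that appear on an in-house deny-list. Both the requirements lines and the deny-list are invented for the example.

```python
# Minimal sketch: flag pinned dependencies that appear on a (hypothetical)
# in-house list of versions with known issues.
KNOWN_BAD = {("requests", "2.5.0"), ("oldlib", "1.0.0")}

requirements = [
    "requests==2.5.0",
    "flask==2.3.2",
]

for line in requirements:
    name, _, version = line.partition("==")
    if (name, version) in KNOWN_BAD:
        print(f"RISK: {name} {version} has well-known issues; plan an upgrade")
```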

Contextualizing risks with current practices

Is testing currently only happening at the surface? Are areas/features covered by automated test scripts? At what levels and of what types (e.g., unit, integration, system, functional, performance, security)?

Knowing where we stand and depart from in terms of testing tells us a bit about the potential risks that can exist.

What are we covering already? To what extent? Do we have quick feedback loops about it? Do we have a history about it?
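
A rough inventory of the automated tests we already have can help answer those questions. The sketch below assumes a hypothetical layout of tests/unit, tests/integration, and tests/e2e directories; adapt the paths to your own repository.

```python
# Minimal sketch: count test files per level to get a crude picture of
# existing automated coverage. The directory layout is hypothetical.
from pathlib import Path

def test_inventory(root="tests"):
    """Count test files per sub-directory as a rough coverage picture."""
    counts = {}
    for level_dir in Path(root).glob("*"):
        if level_dir.is_dir():
            counts[level_dir.name] = len(list(level_dir.rglob("test_*.py")))
    return counts

if __name__ == "__main__":
    print(test_inventory())
```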

Remember the unknowns

The risks that we are aware of, we can handle somehow, including with more or less testing depth.

We can group risks in a knowledge-based matrix to get a better understanding of the risks that exist and how we can approach them:

  • We can check the risks we know we know
  • We can check & explore the risks we know we don't know
  • We can pair with others to tackle unknown knowns, and expose our biases or things we assume or skip without knowing
  • We can keep learning and exploring to uncover unknown unknowns

From risks to test charters

Say we have selected a risk... how can we turn it into a test charter?

Let's consider the following test charter template as a basis; remember that this is a template, not a strict format, so you can adapt it freely to match your needs.


Charter template

Explore <area, feature, risk>

with/using <resources, restrictions, heuristics, dependencies, tools>

to discover <information>

Adapted from Maaret Pyhäjärvi, Elisabeth Hendrickson


Since we'll be performing a testing session, it will implicitly be limited in time, resources, and depth.

First, we have to think about the scope of what we aim to test, broadly speaking, i.e. the subject of our testing. Do we want to perform the testing around a specific feature? Around a subset of an existing feature? Around a flow? The whole product? If the latter, then you'll probably need to refine your scope and limit it further.

Second, what will we bring to the testing session? Are there any resources, tools, or heuristics that can help us? Are there any restrictions that can be used to limit the scope of our testing or to increase the probability of finding problems in the subject of our testing?

Third, what kind of information do we want to find? Do we want to find problems around the identified risk in general? Won't that be too broad? Maybe we need to refine it.
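
Putting those three answers together gives a concrete charter. The sketch below shows the template's three slots as a small data structure with one hypothetical example; the feature, resources, and wording are invented, not prescriptive.

```python
# Minimal sketch: one risk turned into a charter using the template's three
# slots. The concrete wording is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Charter:
    explore: str       # area, feature, or risk
    with_using: str    # resources, restrictions, heuristics, tools
    to_discover: str   # the information we want

charter = Charter(
    explore="the new CSV export feature",
    with_using="large and malformed input files, a throttled network",
    to_discover="situations where exported data is truncated or corrupted",
)

print(f"Explore {charter.explore}\nwith/using {charter.with_using}\n"
      f"to discover {charter.to_discover}")
```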


Remember that we're talking about risks. As such, there's a probability of them occurring and an impact if they occur.

Our test charters should aim to maximize probability on one side, but also consider situations that have a relatively low probability yet still a major impact.
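
A simple probability-times-impact score can help decide where to spend limited sessions, while keeping low-probability/high-impact items visible. The risks and the 1-5 scales below are made up purely for illustration.

```python
# Minimal sketch: order candidate risks by probability x impact, and flag
# low-probability / high-impact items so they aren't forgotten.
risks = [
    {"name": "feature unclear to users", "probability": 4, "impact": 3},
    {"name": "data loss on concurrent edits", "probability": 1, "impact": 5},
    {"name": "minor UI inconsistency", "probability": 3, "impact": 1},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    flag = " (low probability, high impact)" if r["probability"] <= 2 and r["impact"] >= 4 else ""
    print(f'{r["score"]:>2}  {r["name"]}{flag}')
```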

A few additional tips

  • Internals
    • listen to your team, and pair with other team members, including developers, to expose risks that could otherwise escape
  • Product Context & Market
    • listen to your users, to understand what's important to them, what they value most, and the things that frustrate them the most; these will give you ideas of potential risks
    • listen to your business and what's important to them, so you're on the same page and don't ignore aspects that ultimately matter to your company and management
    • experiment with similar products, as you'll gain knowledge about the common and sometimes implicit expectations users may have when using your own product
  • Success
    • try to understand what "success" means to different stakeholders
    • understand "where the money comes from" vs "what are the common user/usage flows"
  • Background Knowledge
    • learn more about quality attributes, to become aware of different aspects that people value differently; these are the dimensions of quality we can keep in the back of our minds, informing us about which quality aspects can be targeted by risks
    • learn more about heuristics tailored for testing, as these can provide ideas for test charters as well as ideas for many diverse experiments during the test session itself


