...

To build the corresponding model in TCD, you will need to set aside (temporarily) some of the lessons about parameter and value definitions, because the objective here is different. Instead of optimizing the scenario count, the goal of this data set is to be a representative sample of the real world and to eliminate as much human bias as possible. This means paying attention not just to data quality but also to completeness.

Such a model would not only include all parameters regardless of their impact on the business outcome, but also use lengthy, highly detailed value lists (often more than 10 values per parameter). To distinguish between the review and the “consumption” formats, value names and value expansions can be adjusted accordingly (e.g., a value name can be “sell some” for communication to stakeholders, while its expansion can be “3” to match the data encoding).
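
As a rough illustration, such a dual-format definition can be captured as a mapping from stakeholder-facing value names to their encoded expansions. The parameter names, values, and encodings below are hypothetical, not taken from any particular TCD model:

```python
# Hypothetical sketch: each parameter keeps a full, detailed value list, and
# every review-format value name maps to the "consumption" expansion that the
# data encoding actually uses.
from typing import Dict

# parameter -> {review-format value name: consumption-format expansion}
model: Dict[str, Dict[str, str]] = {
    "customer_action": {
        "sell none": "0",
        "sell some": "3",   # readable for stakeholders; "3" matches the encoding
        "sell most": "7",
        "sell all": "10",
    },
    "account_age": {
        "brand new": "0-1",
        "established": "2-5",
        "long-term": "6-15",
        "legacy": "16+",
    },
}

def expand(parameter: str, value_name: str) -> str:
    """Translate a review-format value name into its encoded expansion."""
    return model[parameter][value_name]

print(expand("customer_action", "sell some"))  # -> "3"
```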

...

This phase is the closest to TCD’s “bread and butter”. The model would serve a dual purpose: 1) smoke testing of the AI itself, and 2) integration testing of how it is operationalized.
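
A minimal sketch of that dual purpose is below, assuming a pytest-style setup; `predict` and `run_pipeline` are invented stand-ins for the AI system and the pipeline around it, not real TCD or product APIs:

```python
# Hypothetical sketch of the dual purpose: the same generated scenario is used
# both to smoke-test the AI and to exercise the operationalized pipeline.
# `predict` and `run_pipeline` are trivial stand-ins, not real APIs.

VALID_LABELS = {"approve", "refer", "decline"}

def predict(scenario: dict) -> str:
    """Stand-in for the AI system under test."""
    return "approve"

def run_pipeline(scenario: dict) -> dict:
    """Stand-in for the operationalized pipeline wrapping the AI system."""
    return {"status": "completed", "decision": predict(scenario)}

# One generated scenario, expressed in the consumption format.
scenario = {"customer_action": "3", "account_age": "2-5"}

def test_smoke_ai_returns_valid_label():
    # 1) Smoke test: the model itself produces a usable answer.
    assert predict(scenario) in VALID_LABELS

def test_integration_pipeline_end_to_end():
    # 2) Integration test: the same scenario flows through the whole pipeline.
    result = run_pipeline(scenario)
    assert result["status"] == "completed"
    assert result["decision"] in VALID_LABELS
```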

Given the execution setup, you would likely have to keep all the factors consumed by the AI system but, for this phase, reduce the number of values per factor based on their importance (both business-wise and algorithm-wise).
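
For illustration only (the importance scores and the cut-off below are invented), such a reduction can be as simple as keeping every factor but trimming each value list to its highest-importance values:

```python
# Hypothetical sketch: every factor consumed by the AI system is kept, but each
# value list is trimmed to the values rated most important. Scores and the
# threshold are invented for illustration.

full_model = {
    "customer_action": {"0": 0.9, "3": 0.8, "5": 0.3, "7": 0.7, "10": 0.9},
    "account_age":     {"0-1": 0.9, "2-5": 0.6, "6-15": 0.4, "16+": 0.8},
    "region":          {"NA": 0.9, "EMEA": 0.7, "APAC": 0.7, "LATAM": 0.2},
}

IMPORTANCE_CUTOFF = 0.5  # invented threshold

# Every factor survives; only low-importance values are dropped.
reduced_model = {
    factor: [value for value, score in values.items() if score >= IMPORTANCE_CUTOFF]
    for factor, values in full_model.items()
}

for factor, values in reduced_model.items():
    print(f"{factor}: {len(values)} of {len(full_model[factor])} values kept -> {values}")
```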

Scenario volume would still be largely driven by the “standard” integration priorities (i.e., key parameters affecting multiple systems), but the number of values and/or the average mixed-strength dropdown selection would be higher than typical.
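
To make the volume effect concrete, here is a small sketch (with invented value counts) comparing the exhaustive scenario count against the usual lower bound for t-way coverage, which is the product of the t largest value-list sizes; moving the mixed-strength selection from 2-way to 3-way raises the minimum scenario count accordingly:

```python
# Hypothetical illustration of how a higher mixed-strength selection inflates
# scenario volume. Value counts per parameter are invented.
from math import prod

value_counts = {
    "customer_action": 12,  # long, detailed value list
    "account_age": 8,
    "region": 6,
    "channel": 4,
    "device": 3,
}

def exhaustive(counts: dict) -> int:
    """All possible combinations (the full cartesian product)."""
    return prod(counts.values())

def t_way_lower_bound(counts: dict, t: int) -> int:
    """A t-way covering set needs at least as many scenarios as the product
    of the t largest value-list sizes."""
    return prod(sorted(counts.values(), reverse=True)[:t])

print("exhaustive:", exhaustive(value_counts))               # 6912
print("2-way minimum:", t_way_lower_bound(value_counts, 2))  # 96
print("3-way minimum:", t_way_lower_bound(value_counts, 3))  # 576
```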

Focusing on the “just right” level of detail for the high-significance factors will help ensure an optimal data set for sustainable AI testing.

...