
APPLIED RESEARCH

Framing matters at Incident AI

Generative AI assistants can now review thousands of pages of evidence in seconds—a game-changer for incident investigators who must deliver timely findings while operations remain suspended. Most existing framings lean on the control-centric paradigm: locate missing or failed defences, highlight individual non-compliances, recommend engineering fixes. In contrast, Human-and-Organisational Performance (HOP) / Safety II thinking argues that incidents emerge through normal adaptations, trade-offs, and information-flow breakdowns.


Comparative Effects

We conducted an in-depth experiment using the same highwall-failure evidence set to see how a large language model (LLM) inside Incident AI behaves when supplied with (see the sketch after this list):

  • Framing A – “Generic”: classic barrier/error framing.

  • Framing B – “HOP/Safety II”: a 120-word appendix instructing the model to surface system pressures, adaptations, and learning gaps. Read the full paper here.

  • A one-by-one analysis is available here.
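
The exact prompt wiring inside Incident AI is not public, so the following is a minimal sketch of the two-framing set-up, using the OpenAI chat API as a stand-in for the LLM. The model name, evidence file, and the condensed HOP/Safety II appendix text are illustrative assumptions, not the study's actual values.

# Illustrative sketch: same evidence, two framings, one model call each.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GENERIC_FRAMING = (
    "You are an incident investigator. Identify missing or failed barriers "
    "and individual non-compliances, and recommend engineering fixes."
)

# Hypothetical condensed stand-in for the study's 120-word HOP/Safety II appendix.
HOP_FRAMING = GENERIC_FRAMING + (
    "\n\nAppendix: analyse the incident as a systems interaction. Surface goal "
    "conflicts and production pressures, normal adaptations and trade-offs, "
    "information-flow breakdowns, and learning gaps; do not stop at individual error."
)

def investigate(framing: str, evidence: str) -> str:
    """Run one LLM investigation of the evidence under the given framing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": evidence},
        ],
    )
    return response.choices[0].message.content

evidence = open("highwall_failure_evidence.txt").read()  # hypothetical file
report_generic = investigate(GENERIC_FRAMING, evidence)  # Framing A
report_hop = investigate(HOP_FRAMING, evidence)          # Framing B

Holding the evidence set and the model constant means any systematic differences between the two reports can be attributed to the framing text alone.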

Qualitative Study

Using the same highwall-failure evidence set, we generated two independent LLM investigations inside Incident AI: a Generic framing (barrier/error framing) and a HOP/Safety II framing (systems-interaction framing). Beyond the previously reported structural deltas, we applied four qualitative lenses: Agency & Attribution, Causal-Chain Depth, Bias Scan, and Concept Saturation (a sketch of one lens follows). Read the full paper here.
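
As one illustration of how a lens such as Concept Saturation might be operationalised, the sketch below tallies control-centric versus HOP/Safety II vocabulary in each saved report. The concept lexicons and file names are hypothetical assumptions, not the paper's actual coding scheme.

import re
from collections import Counter

# Hypothetical concept lexicons; the study's real coding scheme may differ.
CONTROL_CENTRIC = ["barrier", "defence", "non-compliance", "violation", "root cause"]
HOP_SAFETY_II = ["adaptation", "trade-off", "goal conflict",
                 "production pressure", "information flow", "learning gap"]

def saturation(report: str, lexicon: list[str]) -> Counter:
    """Count case-insensitive occurrences of each concept term in a report."""
    text = report.lower()
    return Counter({term: len(re.findall(re.escape(term), text)) for term in lexicon})

# Hypothetical file names for the two saved investigations.
reports = {
    "Generic": open("report_generic.txt").read(),
    "HOP/Safety II": open("report_hop_safety_ii.txt").read(),
}
for name, text in reports.items():
    print(name,
          "| control-centric terms:", sum(saturation(text, CONTROL_CENTRIC).values()),
          "| HOP/Safety II terms:", sum(saturation(text, HOP_SAFETY_II).values()))

A higher HOP/Safety II term count under Framing B, with the control-centric count roughly unchanged, would be one signal that the appendix shifted the model's conceptual vocabulary rather than merely its report structure.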
