
Incident AI - Newsletter 08/25

  • Writer: ekin eraydin
  • Aug 14
  • 3 min read

Welcome to the first edition of the Incident AI Newsletter – your quick update on how Incident AI is evolving to make your investigations faster, sharper, and more reliable.

We’ve been listening closely to feedback from users like you.


This month, we’re excited to introduce three powerful tools now available in Incident AI:

  1. Evaluation / Sense Check

Make sure your investigation findings hold up to scrutiny. Sense Check scans your analysis for logic gaps, inconsistent reasoning, and unsupported conclusions – so you can be confident your report will stand strong in any review. It seamlessly compares all of your edited and approved outcomes against the evidence. 


  2. Evaluation / Quality Check

A built-in peer reviewer for your investigation, driven by an AI-powered PDCA (Plan-Do-Check-Act) cycle. Quality Check compares your investigation against industry best practices and your evidence, highlighting where your analysis meets, exceeds, or falls short of expectations.


  3. Evidence Stats - MAJOR UPGRADE 13/08

Know exactly what’s driving your conclusions. Evidence Stats gives you a clear breakdown of your evidence set – showing relevance, bias indicators, and coverage – so you can spot weak or missing evidence before you close the investigation. Instantly see which files don’t belong to the incident (for example, ones uploaded by mistake), or how biased a witness statement is.


Relevance evaluation criteria:


- Direct connection to incident causation

- Alignment with investigation analysis results

- Contribution to understanding root causes

- Support for or contradiction of key findings

- Value for corrective action development

- Completeness and detail level


What does the score range mean?


  • 60–100% → Worth keeping. This means the evidence is clearly relevant to the incident. It either explains part of the cause, confirms facts, or helps build the picture of what happened.

  • 30–60% → Borderline. The connection is weak or indirect — it might give some background context, but it’s not essential. These are the ones you’d only keep if they serve a specific legal or completeness purpose.

  • Below 30% → Not relevant. The evidence doesn’t contribute to understanding the incident at all and is likely safe to exclude.
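The score bands above boil down to a simple threshold lookup. As a minimal illustrative sketch (the function name and labels are hypothetical; only the thresholds come from the bands described above):

```python
def relevance_band(score: float) -> str:
    """Map a relevance score (0-100) to the Evidence Stats bands
    described in this newsletter. Illustrative only."""
    if score >= 60:
        return "worth keeping"   # clearly relevant to the incident
    if score >= 30:
        return "borderline"      # weak or indirect connection
    return "not relevant"        # likely safe to exclude
```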


Bias evaluation criteria:

- Source credibility and potential conflicts of interest

- Emotional language vs. objective reporting

- Self-serving statements or defensiveness

- Consistency with other evidence sources

- Timing of evidence collection (immediate vs. delayed)

- Witness position, role, and involvement in the incident

What does the score range mean?

  • 60–100% Green → Mostly objective, factual, from a highly credible source with no apparent conflicts of interest or self-serving slant.

  • 30–60% Yellow → Mostly reliable but contains some natural subjectivity, possible defensiveness, or perspectives influenced by the witness’s role in the incident.

  • Below 30% Red → Strongly biased — heavy emotional language, self-protection, or contradictions with other evidence. Should be used cautiously and cross-verified.
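The bias traffic lights follow the same pattern: a higher score means more objective, and the colour is just a threshold on that score. A minimal sketch under that assumption (the function name is hypothetical; the thresholds come from the bands above):

```python
def bias_band(score: float) -> str:
    """Map a bias score (0-100, higher = more objective) to the
    traffic-light colours described in this newsletter. Illustrative only."""
    if score >= 60:
        return "green"    # mostly objective and credible
    if score >= 30:
        return "yellow"   # some subjectivity or defensiveness
    return "red"          # strongly biased; cross-verify before use
```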


Why these matter:


These tools aren’t just about speed – they’re about giving you confidence at every stage of your investigation. In the field, in the office, or in front of leadership, you can be sure your findings are robust, defensible, and aligned with best practice. We’ve prioritised these features because they also act as key human control points – helping you manage the potential risks of AI as you begin embedding Incident AI into your safety workflows.


What’s next:


We’re working on the next set of improvements, including:

  • Streamlined outputs and functionality

  • An interview voice recorder as an optional addition to the Interview Questions tool: collect evidence fast!

  • A photo library that lets you prepare incident briefs and outcome summary reports instantly.


Want to try these new tools?


If you’re currently trialing Incident AI, you’ll see them in your dashboard now. If you’ve enquired but haven’t started a trial, feel free to contact us. 

For some of you, we’re looking forward to catching up at QMIHSC25, where Mine Guard AI is the AI sponsor. Visit us at booth 63 to experience these improvements first hand.

Or join our session on the 20th of August about the new tool we are building: SHMS AI.

Here’s to better, faster, more defensible investigations.


– Mine Guard AI Team –
