Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews

This study evaluated the Hierarchy of Intervention Effectiveness (HIE) as a tool for improving patient safety incident recommendations. The authors were particularly interested in increasing the proportion of system-focused recommendations. Data were drawn from a 16-month period. Extracts: Ref: Lan, M. F., Weatherby, H., Chimonides, E., Chartier, L. B., & Pozzobon, L. D. (2025, June). Using the hierarchy of intervention… Continue reading Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews
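The idea of "increasing the proportion of system-focused recommendations" can be sketched in a few lines. The tier mapping below follows a commonly cited version of the hierarchy (forcing functions and automation at the strong, system-focused end; rules and education at the weak, person-focused end) — the paper's exact categories may differ, so treat this mapping as an assumption:

```python
# Commonly cited HIE tiers mapped to (strength, focus).
# This mapping is an illustrative assumption, not the paper's taxonomy.
HIE = {
    "forcing function": ("strong", "system-focused"),
    "automation/computerization": ("strong", "system-focused"),
    "simplification/standardization": ("medium", "system-focused"),
    "reminders/checklists": ("medium", "system-focused"),
    "rules and policies": ("weak", "person-focused"),
    "education/training": ("weak", "person-focused"),
}

def proportion_system_focused(recommendations):
    """Share of recommendations that target the system rather than individuals."""
    system = [r for r in recommendations if HIE[r][1] == "system-focused"]
    return len(system) / len(recommendations)

# A hypothetical incident review producing three recommendations:
recs = ["education/training", "forcing function", "reminders/checklists"]
print(round(proportion_system_focused(recs), 2))  # 2 of 3 -> 0.67
```

A review team could use a metric like this before and after introducing the HIE to check whether the framework actually shifts recommendations away from person-focused fixes.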

ChatGPT is bullshit

This paper challenges the label of AI ‘hallucinations’, arguing instead that these falsehoods are better described as bullshit in the Frankfurtian sense (Frankfurt’s ‘On Bullshit’, published in 2005): the models are “in an important way indifferent to the truth of their outputs”. This isn’t BS in the sense of junk data or analysis, but… Continue reading ChatGPT is bullshit

Problems with Risk Matrices Using Ordinal Scales

This covers some core problems with risk matrices. It’s argued that while they’re established tools, appearing to be “authoritative, and intellectually rigorous”, this “could be just an illusion …bred by the human bias of uncertainty aversion and authority bias”. Hence, matrices have “many flaws” that can “diminish their usefulness to the point where they become… Continue reading Problems with Risk Matrices Using Ordinal Scales
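One of the core flaws of ordinal risk matrices can be shown with a small sketch (the bin boundaries and hazard values below are illustrative assumptions, not from the paper): two hazards can receive the identical matrix score while their quantitative expected losses differ by nearly two orders of magnitude.

```python
# Hypothetical 5x5 risk matrix: probability and loss mapped to ordinal ranks 1-5.
# Bin boundaries are assumed for illustration.

def likelihood_rank(p):
    """Map annual probability to an ordinal likelihood rank (1-5)."""
    bounds = [0.001, 0.01, 0.1, 0.5]
    return sum(p > b for b in bounds) + 1

def severity_rank(loss):
    """Map loss (in $) to an ordinal severity rank (1-5)."""
    bounds = [1e3, 1e4, 1e5, 1e6]
    return sum(loss > b for b in bounds) + 1

def matrix_score(p, loss):
    # The common (flawed) practice: multiply ordinal ranks as if they were numbers.
    return likelihood_rank(p) * severity_rank(loss)

# Hazard A sits just above its bins' lower boundaries; hazard B near the upper ends.
a = (0.011, 11_000)   # p = 1.1%, loss = $11k  -> expected loss $121
b = (0.09, 99_000)    # p = 9%,   loss = $99k  -> expected loss $8,910

print(matrix_score(*a), matrix_score(*b))  # both score 9 on the matrix
print(a[0] * a[1], b[0] * b[1])            # true expected losses differ ~74x
```

Because each ordinal rank hides a wide numeric range, rank arithmetic throws away the information needed to compare risks — which is one way the matrix’s apparent rigour becomes “just an illusion”.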

ChatGPT in complex adaptive healthcare systems: embrace with caution

This discussion paper explored the introduction of AI systems into healthcare. It covers A LOT of ground, so just a few extracts. Extracts: · “This article advocates an ‘embrace with caution’ stance, calling for reflexive governance, heightened ethical oversight, and a nuanced appreciation of systemic complexity to harness generative AI’s benefits while preserving the integrity of… Continue reading ChatGPT in complex adaptive healthcare systems: embrace with caution

The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership

Can adopting workplace AI technologies adversely affect employee psychological distress and depression? Yes, according to this study, which used online surveys of 381 employees at South Korean companies. Background: · “In AI-centric environments .. AI reshapes jobs and workflows, affecting workers’ psychological health, satisfaction, commitment, and performance, as well as broader organizational outcomes” · “While AI adoption affects… Continue reading The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership

An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations

This study found that several LLMs are fairly easily influenced by anchoring effects, consistent with human anchoring bias. Extracts: · “Although LLMs surpass humans in standard benchmarks, their psychological traits remain understudied despite their growing importance” · “The anchoring effect is a ubiquitous cognitive bias (Furnham and Boo, 2011) and influences decisions in many fields” · “Under uncertainty,… Continue reading An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations

Rail suicide: A systematic review using systems thinking

This systematic review evaluated rail suicide research against the systems thinking techniques AcciMap & PreventiMap. Some extracts: · “In Australia, 67 suicides by train occurred across 2019–20, representing 80% of all fatalities occurring on the railways” · “Rail suicide is distinct in that in addition to the person who dies by suicide [and the family/friends affected],… Continue reading Rail suicide: A systematic review using systems thinking

LLMs Are Not Reliable Human Proxies to Study Affordances in Data Visualizations

This was pretty interesting – it compared GPT-4o to people in extracting takeaways from visualised data. They were also interested in how well the LLM could simulate human respondents/responses. Note that the researchers are primarily interested in whether the GPT-4o model acts as a suitable proxy for human responses – they recognise there are other… Continue reading LLMs Are Not Reliable Human Proxies to Study Affordances in Data Visualizations

Systems thinking, culture of reliability and safety

Fantastic read from Nick Pidgeon on how systems approaches, Turner’s MMD, sensemaking, failure and learning intersect to create or mask ‘safety’. Can’t do it justice, so just a few extracts: · “By 1990, it was clear that the .. intellectual focus was less on analysing how past accidents had occurred .. and more towards .. how… Continue reading Systems thinking, culture of reliability and safety

Building Resilience into Safety Management Systems: Precursors and Controls to Reduce Serious Injuries and Fatalities (SIFs)

Another on SIF prevention. This (interim) report (another from the recent compendium – see comments for link) covers the findings from a few activities, including two SIF workshops on identifying, implementing and monitoring critical controls for SIF hazards, and on the role of human and organisational factors. Too much to cover, so a few extracts: · “the… Continue reading Building Resilience into Safety Management Systems: Precursors and Controls to Reduce Serious Injuries and Fatalities (SIFs)