This study explored when and how Large Language Models (LLMs) expose the human user to biased content, and quantified the extent of biased information. For example, they fed the LLMs texts and asked them to summarise, then compared how the LLMs changed the content or context, hallucinated, or shifted the sentiment. Providing context: · LLMs “are… Continue reading How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?
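Not the paper’s actual pipeline, but a minimal sketch of one way the sentiment-shift part of such a comparison could be quantified. The Hugging Face sentiment classifier is my assumption, and `llm_summarise` is a hypothetical stand-in for a real model call:

```python
# Minimal sketch (my own framing, not the paper's method) of measuring
# sentiment drift between a source text and an LLM-generated summary.

from transformers import pipeline  # Hugging Face sentiment classifier

sentiment = pipeline("sentiment-analysis")

def llm_summarise(text: str) -> str:
    """Hypothetical LLM summarisation call; replace with a real client."""
    raise NotImplementedError

def signed_score(text: str) -> float:
    """Map classifier output to [-1, 1]: negative labels get a minus sign."""
    result = sentiment(text[:512])[0]  # truncate to the model's input window
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

def sentiment_drift(source: str) -> float:
    """Positive drift = the summary reads more positive than its source."""
    return signed_score(llm_summarise(source)) - signed_score(source)
```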
Cut the crap: a critical response to “ChatGPT is bullshit”
Here’s a critical response paper to yesterday’s “ChatGPT is bullshit” article from Hicks et al. Links to both articles below. Some core arguments: · Hicks et al. characterise LLMs as bullshitters, since LLMs “cannot themselves be concerned with truth,” and thus “everything they produce is bullshit” · Hicks et al. reject anthropomorphic terms such as hallucination or confabulation, since… Continue reading Cut the crap: a critical response to “ChatGPT is bullshit”
Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews
This study evaluated the Hierarchy of Intervention Effectiveness (HIE) for improving patient safety incident recommendations. They were mainly interested in increasing the proportion of system-focused recommendations. Data were collected over a 16-month period. Extracts: Ref: Lan, M. F., Weatherby, H., Chimonides, E., Chartier, L. B., & Pozzobon, L. D. (2025, June). Using the hierarchy of intervention… Continue reading Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews
ChatGPT is bullshit
This paper challenges the label of AI hallucinations – arguing instead that these falsehoods are better described as bullshit. That is, bullshit in the Frankfurtian sense (Frankfurt’s ‘On Bullshit’, published in 2005): the models are “in an important way indifferent to the truth of their outputs”. This isn’t BS in the sense of junk data or analysis, but… Continue reading ChatGPT is bullshit
Problems with Risk Matrices Using Ordinal Scales
This covers some core problems with risk matrices. It’s argued that while they’re established tools that appear “authoritative, and intellectually rigorous”, this “could be just an illusion …bred by the human bias of uncertainty aversion and authority bias”. Hence, matrices have “many flaws” that can “diminish their usefulness to the point where they become… Continue reading Problems with Risk Matrices Using Ordinal Scales
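One commonly cited flaw of ordinal risk matrices (my own illustration under assumed bin cutoffs, not necessarily the paper’s example) is that binning quantitative risks into ordinal scores and multiplying those scores can invert the true ranking of two risks:

```python
# Minimal sketch: an ordinal 5x5 risk matrix can rank risk A above risk B
# even though B's expected loss is larger, because both inputs fall just
# either side of a bin cutoff. Cutoffs and figures here are illustrative.

def ordinal_bin(value, cutoffs):
    """Map a quantitative value to an ordinal 1..5 score via cutoffs."""
    for score, cutoff in enumerate(cutoffs, start=1):
        if value <= cutoff:
            return score
    return len(cutoffs) + 1

p_cut = [0.001, 0.01, 0.1, 0.5]    # probability per year -> scores 1..5
l_cut = [10, 100, 1_000, 10_000]   # loss in $k            -> scores 1..5

risks = {
    "A": (0.101, 101),   # expected loss ~ 10.2
    "B": (0.100, 1_000), # expected loss = 100 (the larger true risk)
}

for name, (p, loss) in risks.items():
    score = ordinal_bin(p, p_cut) * ordinal_bin(loss, l_cut)
    print(name, "matrix score:", score, "| expected loss:", p * loss)
```

Here A scores 12 on the matrix and B scores 9, even though B’s expected loss is roughly ten times larger.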
ChatGPT in complex adaptive healthcare systems: embrace with caution
This discussion paper explored the introduction of AI systems into healthcare. It covers A LOT of ground, so just a few extracts. Extracts: · “This article advocates an ‘embrace with caution’ stance, calling for reflexive governance, heightened ethical oversight, and a nuanced appreciation of systemic complexity to harness generative AI’s benefits while preserving the integrity of… Continue reading ChatGPT in complex adaptive healthcare systems: embrace with caution
Ergonomics & Human factors: fade of a discipline
This commentary from de Winter and Eisma argues that Human Factors & Ergonomics (HFE) may be “losing credibility” and significance. Despite claims about being a thriving science, it’s argued that the discipline may be at risk of slowly fading because of some of these challenges. This paper had several follow-up articles and rebuttals from other… Continue reading Ergonomics & Human factors: fade of a discipline
The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership
Can adopting workplace AI technologies increase employee psychological distress and depression? Yes, according to this study, which used online surveys of 381 employees at South Korean companies. Background: · “In AI-centric environments .. AI reshapes jobs and workflows, affecting workers’ psychological health, satisfaction, commitment, and performance, as well as broader organizational outcomes” · “While AI adoption affects… Continue reading The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership
An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations
This study found that several LLMs are fairly easily influenced by anchoring effects, consistent with human anchoring bias. Extracts: · “Although LLMs surpass humans in standard benchmarks, their psychological traits remain understudied despite their growing importance” · “The anchoring effect is a ubiquitous cognitive bias (Furnham and Boo, 2011) and influences decisions in many fields” · “Under uncertainty,… Continue reading An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations
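As a rough illustration of how such anchoring could be probed (my own sketch, not the authors’ protocol; `query_llm` is a hypothetical stand-in for whatever client you use):

```python
# Minimal sketch: ask the same estimation question with a high anchor, a
# low anchor, and no anchor, then compare mean numeric answers. Anchoring
# predicts: high_anchor mean > no_anchor mean > low_anchor mean.

import re
import statistics

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

QUESTION = "Estimate the length of the Mississippi River in kilometres."
PROMPTS = {
    "no_anchor": QUESTION,
    "low_anchor": "Is it longer or shorter than 500 km? " + QUESTION,
    "high_anchor": "Is it longer or shorter than 10,000 km? " + QUESTION,
}

def first_number(text: str) -> float:
    """Pull the first numeric value out of a free-text answer."""
    match = re.search(r"[\d,]+(?:\.\d+)?", text)
    return float(match.group().replace(",", ""))

def run(trials: int = 10) -> dict:
    """Mean estimate per condition, over several repeated trials."""
    return {
        name: statistics.mean(first_number(query_llm(p)) for _ in range(trials))
        for name, p in PROMPTS.items()
    }
```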
The issues of ‘root causes’ and infinite regression (the endless search for the causes of causes)
A really interesting, but challenging, read about the ontological status of ‘root causes’ and, more pointedly, the problem of infinite regression (the endless search for the causes of causes). The author also proposes some stopping rules to help navigate infinite regression. I’ve previously posted articles critical of the status of ‘root causes’, which argue it is more a process of implicit or explicit… Continue reading The issues of ‘root causes’ and infinite regression (the endless search for the causes of causes)