AI, bullshitting and botshit

“LLMs are great at mimicry and bad at facts, making them a beguiling and amoral technology for bullshitting.” From a paper about ‘botshit’ – summary in a couple of weeks. Source: Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons, 67(5), 471–486. … Continue reading AI, bullshitting and botshit

Mind the Gaps: How AI Shortcomings and Human Concerns May Disrupt Team Cognition in Human-AI Teams (HATs)

This study explored the integration of AI into human teams (Human-AI Teams, HATs) and the hesitations this raises; thirty professionals were interviewed. Not a summary, but some extracts: · “As AI takes on more complex roles in the workplace, it is increasingly expected to act as a teammate rather than just a tool” · HATs “must develop a shared… Continue reading Mind the Gaps: How AI Shortcomings and Human Concerns May Disrupt Team Cognition in Human-AI Teams (HATs)

How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?

This study explored when and how Large Language Models (LLMs) expose users to biased content, and quantified the extent of that bias. For example, the researchers fed the LLMs prompts, asked them to summarise, and then compared how the LLMs altered the content or context, hallucinated, or shifted the sentiment. Providing context: · LLMs “are… Continue reading How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?
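
The excerpt doesn’t show the paper’s actual instruments, but here is a minimal sketch of the general comparison idea – scoring a source text and an LLM-generated summary and flagging a sentiment shift. The tiny word lists, scorer, and threshold below are invented for illustration, not the authors’ method.

```python
# Hypothetical sketch: flag sentiment shift between a source text and an
# LLM-generated summary. Word lists and threshold are illustrative stand-ins.

POSITIVE = {"safe", "effective", "reliable", "improved", "benefit"}
NEGATIVE = {"risk", "harm", "failure", "biased", "error"}

def sentiment_score(text: str) -> float:
    """Crude lexicon score in [-1, 1]: (pos - neg) / total sentiment words."""
    words = [w.strip(".,;:") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def sentiment_shifted(source: str, summary: str, threshold: float = 0.5) -> bool:
    """True if the summary's sentiment moved notably away from the source's."""
    return abs(sentiment_score(summary) - sentiment_score(source)) > threshold

source = "The new procedure carries some risk of harm but showed improved outcomes."
summary = "The new procedure is safe, effective and reliable."
print(sentiment_shifted(source, summary))  # True: the summary reads more positive
```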

Cut the crap: a critical response to “ChatGPT is bullshit”

Here’s a critical response paper to yesterday’s “ChatGPT is bullshit” article from Hicks et al. Links to both articles below. Some core arguments: · Hicks et al. characterise LLMs as bullshitters, since LLMs “cannot themselves be concerned with truth,” and thus “everything they produce is bullshit” · Hicks et al. reject anthropomorphic terms such as hallucination or confabulation, since… Continue reading Cut the crap: a critical response to “ChatGPT is bullshit”

Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews

This study evaluated the Hierarchy of Intervention Effectiveness (HIE) for improving patient safety incident recommendations. The authors were primarily interested in increasing the proportion of system-focused recommendations. Data were collected over 16 months. Extracts: Ref: Lan, M. F., Weatherby, H., Chimonides, E., Chartier, L. B., & Pozzobon, L. D. (2025, June). Using the hierarchy of intervention… Continue reading Using the hierarchy of intervention effectiveness to improve the quality of recommendations developed during critical patient safety incident reviews

ChatGPT is bullshit

This paper challenges the label of AI hallucinations – arguing instead that these falsehoods are better described as bullshit. That is, bullshit in the Frankfurtian sense (Frankfurt’s ‘On Bullshit’, published in 2005): the models are “in an important way indifferent to the truth of their outputs”. This isn’t BS in the sense of junk data or analysis, but… Continue reading ChatGPT is bullshit

Problems with Risk Matrices Using Ordinal Scales

This covers some core problems with risk matrices. It’s argued that while risk matrices are established tools that appear “authoritative, and intellectually rigorous”, this appearance “could be just an illusion …bred by the human bias of uncertainty aversion and authority bias”. Hence, matrices have “many flaws” that can “diminish their usefulness to the point where they become… Continue reading Problems with Risk Matrices Using Ordinal Scales
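
The excerpt doesn’t reproduce the paper’s examples, but one well-known ordinal-scale problem is easy to illustrate: doing arithmetic on ordinal likelihood and severity ranks can order two hazards the opposite way to their underlying quantitative risk. The numbers below are invented for illustration, not taken from the paper.

```python
# Illustrative (invented) example: ordinal rank arithmetic vs quantitative risk.
# Hazard A: fairly likely, moderate cost. Hazard B: rare, catastrophic cost.

def ordinal_score(likelihood_rank: int, severity_rank: int) -> int:
    """Common matrix practice: multiply 1-5 ordinal ranks."""
    return likelihood_rank * severity_rank

# (likelihood rank, severity rank, annual probability, cost if it occurs)
hazards = {
    "A": (4, 2, 0.40, 50_000),
    "B": (1, 5, 0.01, 10_000_000),
}

for name, (l, s, p, cost) in hazards.items():
    print(name, "matrix score:", ordinal_score(l, s), "expected loss:", p * cost)

# The matrix score ranks A (8) above B (5), yet B's expected annual loss
# (100,000) is five times A's (20,000) -- the ordinal product inverts the order.
```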

ChatGPT in complex adaptive healthcare systems: embrace with caution

This discussion paper explored the introduction of AI systems into healthcare. It covers A LOT of ground, so just a few extracts: · “This article advocates an ‘embrace with caution’ stance, calling for reflexive governance, heightened ethical oversight, and a nuanced appreciation of systemic complexity to harness generative AI’s benefits while preserving the integrity of… Continue reading ChatGPT in complex adaptive healthcare systems: embrace with caution

Ergonomics & Human factors: fade of a discipline

This commentary from de Winter and Eisma argues that Human Factors & Ergonomics (HFE) may be “losing credibility” and significance. Despite claims that it is a thriving science, the authors argue that the discipline may be at risk of slowly fading in the face of the challenges they identify. This paper had several follow-up articles and rebuttals from other… Continue reading Ergonomics & Human factors: fade of a discipline

The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership

Can adopting workplace AI technologies increase employee psychological distress and depression? Yes, according to this study, which used online surveys of 381 employees in South Korean companies. Background: · “In AI-centric environments … AI reshapes jobs and workflows, affecting workers’ psychological health, satisfaction, commitment, and performance, as well as broader organizational outcomes” · “While AI adoption affects… Continue reading The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership