Safe As: Can AI make your doctors worse at their job?

Can #AI make your doctor worse at their job? This multicentre study compared physician ADR (Adenoma Detection Rate) before and after AI-assisted detection – and then again after the AI assistance was removed. What do you think – will the overall benefits of AI outweigh the negative and unintended skill degradation in people? (*** Please subscribe, like and… Continue reading Safe As: Can AI make your doctors worse at their job?

AI: Structural vs Algorithmic Hallucinations

#AI: Structural vs Algorithmic Hallucinations. There are several typologies that sort different types of hallucinations – this is just one I recently saw. It suggests that structural hallucinations are an inherent part of the mathematical and logical structure of the #LLM, and not a glitch or a bad prompt. LLMs are probabilistic engines, with no understanding… Continue reading AI: Structural vs Algorithmic Hallucinations

AI deception: A survey of examples, risks, and potential solutions

This study explored how “a range of current AI systems have learned how to deceive humans”. Extracts: · “One part of the problem is inaccurate AI systems, such as chatbots whose confabulations are often assumed to be truthful by unsuspecting users” · “It is difficult to talk about deception in AI systems without psychologizing them. In humans,… Continue reading AI deception: A survey of examples, risks, and potential solutions

Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

AI and malicious compliance. This research from Anthropic has done the rounds, but it is quite interesting. In controlled experiments (not real-world applications), they found that AI models could resort to “malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors”. Some extracts:… Continue reading Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

Large Language Models in Lung Cancer: Systematic Review

This systematic review of 28 studies explored the application of LLMs for lung cancer care and management. Probably few surprises here. And it’s focused mostly on LLMs, rather than specialised AI models. Extracts: · The review identified 7 primary application domains of LLMs in LC: auxiliary diagnosis, information extraction, question answering, scientific research, medical education, nursing… Continue reading Large Language Models in Lung Cancer: Systematic Review

From transcript to insights: Summarizing safety culture interviews with LLMs

From transcript to insights: summarizing safety culture interviews with LLMs. How well does OpenAI o1 work for summarising ‘safety culture’ interviews, and how does it compare to human notes? This study did just that. Extracts: · They assessed correctness via exhaustiveness (comparison of LLM claims vs human interviewer notes), consistency (comparison of LLM claims between subsequent… Continue reading From transcript to insights: Summarizing safety culture interviews with LLMs

Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

How does AI use in training improve, or otherwise impact, skill acquisition? This study manipulated training protocols with varying levels of AI decision-making automation among 102 participants performing a quality control task. Extracts: · “Partial automation led to the most positive outcomes” · “Participants who were trained with the fully automated version of the AIEDS had a significantly… Continue reading Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

Safe As 33: Is ChatGPT bullsh**ing you? How Large Language Models aim to be convincing rather than truthful

Large Language Models, like ChatGPT, have amazing capabilities. But are their responses, aiming to be convincing human text, more indicative of BS? That is, responses that are indifferent to the truth? If they are, what are the practical implications? Today’s paper is: Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and… Continue reading Safe As 33: Is ChatGPT bullsh**ing you? How Large Language Models aim to be convincing rather than truthful

Can chatbots provide more social connection than humans?

Can chatbots provide more social connection than humans? Possibly, provided that they don’t “claim too much humanity”. Three study protocols, with 801, 201 and 401 participants respectively, had them engage with AI social chatbots. They note that the long-term consequences of social chatbot use are unknown, but are important to study since “hundreds of millions of people… Continue reading Can chatbots provide more social connection than humans?

Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil’s Advocate

Can LLMs effectively play devil’s advocate, enhancing group decisions? Something I’ve been working on lately is AI as a co-agent for cognitive diversity / requisite imagination. Here’s a study which explored an LLM as a devil’s advocate, and I’ll post another study next week on AI and red teaming. [Though this study relied on… Continue reading Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil’s Advocate