AI deception: A survey of examples, risks, and potential solutions

This study explored how “a range of current AI systems have learned how to deceive humans”. Extracts: · “One part of the problem is inaccurate AI systems, such as chatbots whose confabulations are often assumed to be truthful by unsuspecting users” · “It is difficult to talk about deception in AI systems without psychologizing them. In humans,… Continue reading AI deception: A survey of examples, risks, and potential solutions

Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

AI and malicious compliance. This research from Anthropic has done the rounds, but it’s quite interesting. In controlled experiments (not real-world applications), they found that AI models could resort to “malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors”. Some extracts:… Continue reading Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

Here Don Norman discusses knowledge in the head vs knowledge in the world – from The Design of Everyday Things. Extracts: · “Every day we are confronted by numerous objects, devices, and services, each of which requires us to behave or act in some particular manner. Overall, we manage quite well” · “Our knowledge is often quite incomplete,… Continue reading Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

This study explored how AI literacy can amplify biases when evaluating AI-generated news articles, depending on their content type (data-driven vs emotional). Extracts: · “Higher AI literacy can intensify opposing biases. When individuals better understand the use of AI in creating data-driven articles, they exhibit automation bias… Conversely, when AI generates opinion- or emotion-based articles, high literacy… Continue reading The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

Large Language Models in Lung Cancer: Systematic Review

This systematic review of 28 studies explored the application of LLMs for lung cancer care and management. Probably few surprises here. And it’s focused mostly on LLMs, rather than specialised AI models. Extracts: · The review identified 7 primary application domains of LLMs in lung cancer (LC): auxiliary diagnosis, information extraction, question answering, scientific research, medical education, nursing… Continue reading Large Language Models in Lung Cancer: Systematic Review

From transcript to insights: Summarizing safety culture interviews with LLMs

How well does OpenAI o1 work for summarising ‘safety culture’ interviews, and how does it compare to human notes? This study did just that. Extracts: · They assessed correctness via exhaustiveness (comparison of LLM claims vs human interviewer notes), consistency (comparison of LLM claims between subsequent… Continue reading From transcript to insights: Summarizing safety culture interviews with LLMs
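The excerpt names exhaustiveness and consistency as the correctness criteria but not how they were scored, so here is a minimal Python sketch of how scores of that kind could be computed in principle. The fuzzy string matching, the 0.6 threshold, and the example claims are illustrative assumptions, not the study’s actual method.

```python
# Illustrative sketch only: the matching logic, threshold, and example data below
# are assumptions, not the protocol used in the study.
from difflib import SequenceMatcher

def covered(claim: str, references: list[str], threshold: float = 0.6) -> bool:
    """Treat a claim as 'covered' if it is sufficiently similar to any reference."""
    return any(SequenceMatcher(None, claim.lower(), ref.lower()).ratio() >= threshold
               for ref in references)

def exhaustiveness(human_notes: list[str], llm_claims: list[str]) -> float:
    """Share of human interviewer notes that also appear among the LLM's claims."""
    if not human_notes:
        return 1.0
    return sum(covered(note, llm_claims) for note in human_notes) / len(human_notes)

def consistency(run_a: list[str], run_b: list[str]) -> float:
    """Share of claims from one LLM run that reappear in a subsequent run."""
    if not run_a:
        return 1.0
    return sum(covered(claim, run_b) for claim in run_a) / len(run_a)

# Hypothetical example data
notes = ["Workers feel safe reporting near misses", "Supervisors rarely follow up on actions"]
run1 = ["Staff are comfortable reporting near misses", "Follow-up on corrective actions is weak"]
run2 = ["Staff are comfortable reporting near misses", "Management communication is inconsistent"]

print(f"Exhaustiveness vs human notes: {exhaustiveness(notes, run1):.2f}")
print(f"Consistency between runs:      {consistency(run1, run2):.2f}")
```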

Safe As week in review: Ineffectiveness of individual mental health interventions / Fatigue risk via defences in depth / AI LLMs are BS’ing you

Safe As covered this week: 31: Do individual mental health interventions work? Maybe not. Do individual-level mental health interventions, like personal resilience training, yoga, fruit bowls, and training, actually improve measures of mental health? This study suggests not. Using survey data from >46k UK workers, it was found that workers who participated in individual-level… Continue reading Safe As week in review: Ineffectiveness of individual mental health interventions / Fatigue risk via defences in depth / AI LLMs are BS’ing you

Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

How does AI use in training improve, or impact, skill acquisition? This study manipulated training protocols with varying levels of AI decision-making automation among 102 participants during a quality control task. Extracts: · “Partial automation led to the most positive outcomes” · “Participants who were trained with the fully automated version of the AIEDS had a significantly… Continue reading Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

Safe As 33: Is ChatGPT bullsh**ing you? How Large Language Models aim to be convincing rather than truthful

Large Language Models, like ChatGPT, have amazing capabilities. But are their responses, aiming to be convincing human text, more indicative of BS? That is, responses that are indifferent to the truth? If they are, what are the practical implications? Today’s paper is: Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and… Continue reading Safe As 33: Is ChatGPT bullsh**ing you? How Large Language Models aim to be convincing rather than truthful

Can chatbots provide more social connection than humans?

Can chatbots provide more social connection than humans? Possibly, provided that they don’t “claim too much humanity”. Three study protocols, with 801, 201, and 401 participants respectively, had participants engage with AI social chatbots. They note that the long-term consequences of social chatbot use are unknown, but it is important to study since “hundreds of millions of people… Continue reading Can chatbots provide more social connection than humans?