AI deception: A survey of examples, risks, and potential solutions

This study explored how “a range of current AI systems have learned how to deceive humans”. Extracts:
·        “One part of the problem is inaccurate AI systems, such as chatbots whose confabulations are often assumed to be truthful by unsuspecting users”
·        “It is difficult to talk about deception in AI systems without psychologizing them. In humans,…

The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

This study explored how AI literacy can amplify biases when evaluating AI-generated news articles, depending on their content type (data-driven vs emotional). Extracts:
·        “Higher AI literacy can intensify opposing biases. When individuals better understand the use of AI in creating data-driven articles, they exhibit automation bias… Conversely, when AI generates opinion- or emotion-based articles, high literacy…

Large Language Models in Lung Cancer: Systematic Review

This systematic review of 28 studies explored the application of LLMs in lung cancer care and management. Probably few surprises here, and it’s focused mostly on LLMs rather than specialised AI models. Extracts:
·        The review identified 7 primary application domains of LLMs in LC: auxiliary diagnosis, information extraction, question answering, scientific research, medical education, nursing…

Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

How does AI use during training affect skill acquisition? This study manipulated training protocols with varying levels of AI decision-making automation among 102 participants performing a quality control task. Extracts:
·        “Partial automation led to the most positive outcomes”
·        “Participants who were trained with the fully automated version of the AIEDS had a significantly…

Safe As 33: Is ChatGPT bullsh**ting you? How Large Language Models aim to be convincing rather than truthful

Large Language Models, like ChatGPT, have amazing capabilities. But in aiming to produce convincing human-like text, are their responses more indicative of BS? That is, responses that are indifferent to the truth? If they are, what are the practical implications? Today’s paper is: Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and…

Can chatbots provide more social connection than humans?

Can chatbots provide more social connection than humans? Possibly, provided that they don’t “claim too much humanity”. Three study protocols, with 801, 201 and 401 participants respectively, had participants engage with AI social chatbots. They note that the long-term consequences of social chatbot use are unknown, but are important to study since “hundreds of millions of people…

Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil’s Advocate

Can LLMs effectively play devil’s advocate, enhancing group decisions? Something I’ve been working on lately is AI as a co-agent for cognitive diversity / requisite imagination. Here’s a study which explored an LLM as a devil’s advocate, and I’ll post another study next week on AI and red teaming. [Though this study relied on…

The impact of generative AI on critical thinking skills: a systematic review, conceptual framework and future research directions

How do generative AI (GenAI) models affect critical thinking skills? This systematic review unpacked 68 studies to explore the good and the bad. GenAI models are “machine-learning algorithms, usually transformer-based large-language models (LLMs), that generate new text, code…

Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots

Really interesting discussion paper on the premise of ‘botshit’: the AI version of bullshit. I can’t do this paper justice here; it’s 16 pages, so I can only cover a few extracts. I recommend reading the full paper. Tl;dr: generative chatbots predict responses rather than knowing the meaning of their responses, and hence “produce coherent-sounding but…

How generative AI reshapes construction and built environment: The good, the bad, and the ugly

This paper discusses some of the good, bad and ugly of GenAI use in construction. GenAI is “poised to fundamentally transform the Construction and Built Environment (CBE) industry”, but it is also a “dual-edged sword, offering immense benefits while simultaneously posing considerable difficulties and potential pitfalls”. Not a summary, just a few extracts:
The Good:
·        GenAI…