BEWARE OF BOTSHIT: HOW TO MANAGE THE EPISTEMIC RISKSOF GENERATIVE CHATBOTS

Really interesting discussion paper on the premise of ‘botshit’: the AI version of bullshit. I can’t do this paper justice – it’s 16 pages, so I can only cover a few extracts. Recommend reading the full paper. Tl;dr: generative chatbots predict responses rather than knowing the meaning of their responses, and hence, “produce coherent-sounding but… Continue reading BEWARE OF BOTSHIT: HOW TO MANAGE THE EPISTEMIC RISKSOF GENERATIVE CHATBOTS

How generative AI reshapes construction and built environment: The good, the bad, and the ugly

This paper discusses some of the good, bad and ugly of GenAI use in construction. GenAI “poised to fundamentally transform the Construction and Built Environment (CBE) industry” but also is a “dual-edged sword, offering immense benefits while simultaneously posing considerable difficulties and potential pitfalls” Not a summary – just a few extracts: The Good: ·        GenAI… Continue reading How generative AI reshapes construction and built environment: The good, the bad, and the ugly

Large language models powered system safety assessment: applying STPA and FRAM

An AI, STPA and FRAM walk into a bar…ok, that’s all I’ve got. This study used ChatGPT-4o and Gemini to apply STPA and FRAM to analyse:   “liquid hydrogen (LH2) aircraft refuelling process, which is not a well- known process, that presents unique challenges in hazard identification”. One of several studies applying LLMs to safety… Continue reading Large language models powered system safety assessment: applying STPA and FRAM

Call Me A Jerk: Persuading AI to Comply with Objectionable Requests

Can LLMs be persuaded to act like d*cks? A really interesting study from Meincke et al. found human persuasion techniques also worked on LLMs. They tested how “classic persuasion principles like authority, commitment, and unity can dramatically increase an AI’s likelihood to comply with requests they are designed to refuse”. I’m drawing from their study… Continue reading Call Me A Jerk: Persuading AI to Comply with Objectionable Requests

Bullshit vs Botshit: what’s the difference?

A couple more extracts from Hannigan et al.’s paper on ‘botshit. Bullshit is “Human-generated content that has no regard for the truth, which a human then uses in communication and decision-making tasks”. Botshit is “Chatbot generated content that is not grounded in truth (i.e., hallucinations) and is then uncritically used by a human in communication… Continue reading Bullshit vs Botshit: what’s the difference?

AI, bullshitting and botshit

“LLMs are great at mimicry and bad at facts, making them a beguiling and amoral technology for bullshitting” From a paper about ‘botshit’ – summary in a couple of weeks. Source: Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons, 67(5), 471-486.… Continue reading AI, bullshitting and botshit

Mind the Gaps: How AI Shortcomings and Human Concerns May Disrupt Team Cognition in Human-AI Teams (HATs)

This study explored the integration and hesitations of AI embedded within human teams (Human-AI Teams, HATs). 30 professionals were interviewed. Not a summary, but some extracts: ·        “As AI takes on more complex roles in the workplace, it is increasingly expected to act as a teammate rather than just a tool” ·        HATs “must develop a shared… Continue reading Mind the Gaps: How AI Shortcomings and Human Concerns May Disrupt Team Cognition in Human-AI Teams (HATs)

How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?

This study explored when and how Large Language Models (LLMs) expose the human user to biased content, and quantified the extent of biased information. E.g. they fed the LLMs prompts and asked it to summarise, and then compared how the LLMs changed the content, context, hallucinated, or changed the sentiment. Providing context: ·         LLMs “are… Continue reading How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?

Cut the crap: a critical response to “ChatGPT is bullshit”

Here’s a critical response paper to yesterday’s “ChatGPT is bullshit” article from Hicks et al. Links to both articles below. Some core arguments: ·        Hick’s characterises LLMs as bullshitters, since LLMs “”cannot themselves be concerned with truth,” and thus “everything they produce is bullshit” ·        Hicks et al. rejects anthropomorphic terms such as hallucination or confabulation, since… Continue reading Cut the crap: a critical response to “ChatGPT is bullshit”

ChatGPT is bullshit

This paper challenges the label of AI hallucinations – arguing instead that these falsehoods better represent bullshit. That is, bullshit, in the Frankfurtian sense (‘On Bullshit’ published in 2005), the models are “in an important way indifferent to the truth of their outputs”. This isn’t BS in the sense of junk data or analysis, but… Continue reading ChatGPT is bullshit