Cut the crap: a critical response to “ChatGPT is bullshit”

Here’s a critical response paper to yesterday’s “ChatGPT is bullshit” article from Hicks et al.

Links to both articles below.

Some core arguments:

·        Hicks et al. characterise LLMs as bullshitters, since LLMs “cannot themselves be concerned with truth,” and thus “everything they produce is bullshit”

·        Hicks et al. reject anthropomorphic terms such as hallucination or confabulation, since in their view these risk conflating LLMs with human psychological capabilities

·        The current authors take up Frankfurt’s concept of ‘bullshit’ as indifference to the truth: in their view, linking LLMs to bullshit also “risks misrepresenting LLMs”

·        For them, an LLM’s indifference to the truth is different to human indifference, where a person “could be concerned about the truth” but consciously chooses not to be

·        They also challenge the claim that “everything ChatGPT and other LLMs output is bullshit”, saying it’s misleading

·        This is because LLMs can produce content that is “accurate”, “comprehensive”, “insightful” and “original”

·        They’re critical of Hicks et al.’s abandonment of anthropomorphic terms like hallucination and confabulation because, if objectivity is the aim, then applying the label of bullshit to LLMs “seems to encounter the same…problem”

·        In these authors’ view, an anthropomorphic term “can be reasonable and perhaps informative even if it is metaphorical or analogical”

·        Therefore, these metaphors aren’t intended to carry “precisely the same meanings when applied to machines as when applied to humans”, but instead serve as “useful ways of describing technology”, like “machine learning”

·        They think fabrication is an apt metaphor for LLM behaviour, applicable when LLMs provide “false or inaccurate response[s]” or are prompted to generate “various forms of fiction”

·        They also like hallucination, since it connotes “wildly unexpected kinds of content” that have a “storied history in the field of AI”

·        Likewise, they value confabulation as a metaphor for when LLMs provide “plausible-sounding fictional reasons” for a factual statement

·        Labelling LLMs as bullshitters or bullshit machines could “undermine a more practical and balanced understanding” of the technology

·        And these exaggerated warnings could “induce the public to underestimate” LLMs’ abilities or prove “educationally and practically counterproductive”

Ref: Gunkel, D., & Coghlan, S. (2025). Cut the crap: a critical response to “ChatGPT is bullshit”. Ethics and Information Technology, 27(2), 23.


Shout me a coffee (one-off or monthly recurring)

Study link: https://link.springer.com/content/pdf/10.1007/s10676-025-09828-3.pdf

ChatGPT is bullshit: https://www.linkedin.com/pulse/chatgpt-bullshit-ben-hutchinson-4cg5c

LinkedIn post: https://www.linkedin.com/posts/benhutchinson2_risk-ai-llm-activity-7348518346671181825-rADH?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAeWwekBvsvDLB8o-zfeeLOQ66VbGXbOpJU
