Safer Systems: People Training or System Tuning?

Hollnagel discusses the role of training in complex systems. Shared under an open access licence. PS. Check out my YouTube channel: Safe As: A thrifty analysis of safety, AI and risk. Extracts: • “Safety is usually seen as a problem when it is absent rather than when it is present, where accidents, incidents, and the like… Continue reading Safer Systems: People Training or System Tuning?

Does using AI make us smarter, or just more confident?

Does using AI make us smarter, or just more confident? This episode covers a recent study on how generative AI affects our “metacognition”: our ability to judge our own performance. Researchers tracked hundreds of people solving logical reasoning problems with and without AI.

Generative artificial intelligence adoption and managerial well-being in construction organizations

Can GenAI improve workplace well-being and work-life balance? Perhaps. This survey study of 261 managerial professionals in Nigeria’s construction industry provides some insights. PS. Check out my YouTube channel. Findings: • “findings reveal a strong and positive association between GenAI adoption and both dimensions of managerial well-being” • “Predictive scheduling, documentation automation, and design… Continue reading Generative artificial intelligence adoption and managerial well-being in construction organizations

Improving Construction Site Safety with Large Language Models: A Performance Analysis

This preliminary, proof-of-concept study explored the effectiveness of GPT-4o at visual hazard recognition on construction sites, contrasting its performance against OHS experts. The source material was static images from Google and real construction sites (not real-time video analysis). The LLM and the experts were asked to rate the hazard, justify their judgement, and assess the immediate issues, use… Continue reading Improving Construction Site Safety with Large Language Models: A Performance Analysis
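The excerpt doesn’t include the study’s actual prompts or scoring rubric, so the sketch below is only a guess at the general pattern it describes: sending a single static site photo to a vision-capable model and asking it to rate and justify a hazard. The model name comes from the excerpt, but the SDK choice, prompt wording, rating scale, and file name are assumptions.

```python
# Minimal sketch (not the paper's actual pipeline): ask a vision-capable model
# to rate a hazard in one static image via the OpenAI Python SDK.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rate_hazard(image_path: str) -> str:
    # Encode the static image (the study used still photos, not video).
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Rate the severity of any visible construction hazard "
                          "from 1 (low) to 5 (high), justify your judgement, "
                          "and list the immediate issues.")},  # illustrative prompt, not the paper's
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(rate_hazard("site_photo.jpg"))  # hypothetical example image
```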

Safe As: Can AI make your doctors worse at their job?

Can #AI make your doctor worse at their job? This multicentre study compared physician ADR (Adenoma Detection Rate) before and after AI-assisted detection, and then again after the AI assistance was removed. What do you think: will the overall benefits of AI outweigh the negative and unintended skill decline in people? (*** Please subscribe, like and… Continue reading Safe As: Can AI make your doctors worse at their job?

AI: Structural vs Algorithmic Hallucinations

#AI: Structural vs Algorithmic Hallucinations. There are several typologies that sort hallucinations into different types; this is just one I recently saw. It suggests that structural hallucinations are an inherent part of the mathematical and logical structure of the #LLM, not a glitch or a bad prompt. LLMs are probabilistic engines, with no understanding… Continue reading AI: Structural vs Algorithmic Hallucinations
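As a toy illustration of the “probabilistic engine” point (this is mine, not from the post or the typology it discusses): the model emits whichever token it samples from a probability distribution over its vocabulary, with no step that checks the claim against the world. The prompt and probabilities below are made up for illustration.

```python
# Toy illustration only: next-token generation as sampling from a distribution.
import random

# Hypothetical next-token probabilities after the prompt
# "The capital of Australia is" -- values are invented.
next_token_probs = {
    "Canberra": 0.55,
    "Sydney": 0.30,    # fluent but wrong continuation
    "Melbourne": 0.10,
    "Perth": 0.05,
}

tokens, weights = zip(*next_token_probs.items())
# Sampling means the wrong-but-plausible answer is emitted some of the time,
# which is one intuition behind calling such hallucinations "structural".
print(random.choices(tokens, weights=weights, k=1)[0])
```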

AI deception: A survey of examples, risks, and potential solutions

This study explored how “a range of current AI systems have learned how to deceive humans”. Extracts: • “One part of the problem is inaccurate AI systems, such as chatbots whose confabulations are often assumed to be truthful by unsuspecting users” • “It is difficult to talk about deception in AI systems without psychologizing them. In humans,… Continue reading AI deception: A survey of examples, risks, and potential solutions

Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

AI and malicious compliance. This research from Anthropic has done the rounds, but it is quite interesting. In controlled experiments (not real-world applications), they found that AI models could resort to “malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors”. Some extracts:… Continue reading Agentic Misalignment: How LLMs could be insider threats (Anthropic research)

Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

Here Don Norman discusses knowledge in the head vs knowledge in the world, from The Design of Everyday Things. Extracts: • “Every day we are confronted by numerous objects, devices, and services, each of which requires us to behave or act in some particular manner. Overall, we manage quite well” • “Our knowledge is often quite incomplete,… Continue reading Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

This study explored how AI literacy can amplify biases when people evaluate AI-generated news, depending on content type (data-driven vs emotional). Extracts: • “Higher AI literacy can intensify opposing biases. When individuals better understand the use of AI in creating data-driven articles, they exhibit automation bias… Conversely, when AI generates opinion- or emotion-based articles, high literacy… Continue reading The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles