Safer Systems: People Training or System Tuning?

Hollnagel discusses the role of training in complex systems. Shared under an open access licence. PS. Check out my YouTube channel: Safe As: A thrifty analysis of safety, AI and risk – YouTube

Extracts:
· “Safety is usually seen as a problem when it is absent rather than when it is present, where accidents, incidents, and the like… Continue reading Safer Systems: People Training or System Tuning?

Generative artificial intelligence adoption and managerial well-being in construction organizations

Can GenAI improve workplace well-being and work-life balance? Perhaps. This survey of 261 construction-industry managerial professionals in Nigeria provides some insights.

Findings:
· “findings reveal a strong and positive association between GenAI adoption and both dimensions of managerial well-being”
· “Predictive scheduling, documentation automation, and design… Continue reading Generative artificial intelligence adoption and managerial well-being in construction organizations

Women’s well-being in construction: a systematic review of hazards and primary preventive measures

This systematic review unpacked 62 studies on women’s well-being in construction, covering key hazards and prevention measures.

Background:
· “construction industry is often shaped by a macho work culture marked by aggression, bullying, and job insecurity due to the nomadic and cyclical nature of projects”
· “women face not… Continue reading Women’s well-being in construction: a systematic review of hazards and primary preventive measures

Improving Construction Site Safety with Large Language Models: A Performance Analysis

This preliminary proof-of-concept study explored the effectiveness of GPT-4o in construction visual hazard recognition, contrasting its performance against OHS experts. The source material was static images from Google and real construction sites (not real-time video analysis). The LLM and the experts were asked to rate each hazard, justify their judgement, and assess the immediate issues, use… Continue reading Improving Construction Site Safety with Large Language Models: A Performance Analysis

Safe As: Can AI make your doctors worse at their job?

Can AI make your doctor worse at their job? This multicentre study compared physicians’ adenoma detection rate (ADR) before and after AI-assisted detection – and then again after removing the AI assistance. What do you think – will the overall benefits of AI outweigh the negative and unintended skill drops in people? … Continue reading Safe As: Can AI make your doctors worse at their job?

Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

Here Don Norman discusses knowledge in the head vs knowledge in the world, from The Design of Everyday Things.

Extracts:
· “Every day we are confronted by numerous objects, devices, and services, each of which requires us to behave or act in some particular manner. Overall, we manage quite well”
· “Our knowledge is often quite incomplete,… Continue reading Knowledge in the head vs the world: And how to design for cognition. Norman – Design of Everyday Things

The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

This study explored how AI literacy can amplify biases when evaluating AI-generated news, depending on the content type (data-driven vs emotional).

Extracts:
· “Higher AI literacy can intensify opposing biases. When individuals better understand the use of AI in creating data-driven articles, they exhibit automation bias… Conversely, when AI generates opinion- or emotion-based articles, high literacy… Continue reading The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles

Large Language Models in Lung Cancer: Systematic Review

This systematic review of 28 studies explored the application of LLMs to lung cancer care and management. Probably few surprises here, and it focuses mostly on general LLMs rather than specialised AI models.

Extracts:
· The review identified 7 primary application domains of LLMs in lung cancer (LC): auxiliary diagnosis, information extraction, question answering, scientific research, medical education, nursing… Continue reading Large Language Models in Lung Cancer: Systematic Review

Safe As week in review: Ineffectiveness of individual mental health interventions / Fatigue risk via defences in depth / AI LLMs are BS’ing you

Safe As covered this week: 31: Do individual mental health interventions work? Maybe not. Do individual-level mental health interventions, like personal resilience training, yoga and fruit bowls, actually improve measures of mental health? This study suggests not. Using survey data from >46k UK workers, it found that workers who participated in individual-level… Continue reading Safe As week in review: Ineffectiveness of individual mental health interventions / Fatigue risk via defences in depth / AI LLMs are BS’ing you

Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition

How does the use of AI during training improve, or impair, skill acquisition? This study manipulated training protocols with varying levels of AI decision-making automation among 102 participants performing a quality control task.

Extracts:
· “Partial automation led to the most positive outcomes”
· “Participants who were trained with the fully automated version of the AIEDS had a significantly… Continue reading Practice With Less AI Makes Perfect: Partially Automated AI During Training Leads to Better Worker Motivation, Engagement, and Skill Acquisition