This study evaluated whether LLMs can support scaled, systematic analysis of survey data about workers' adaptive practices, to aid weak-signal identification. In other words: can LLMs help identify weak signals from large-scale data? Here, the data were textual descriptions of frontline personnel's adaptive behaviours during everyday operations, collected via survey. PS. Check out… Continue reading Tuning into whispered frequencies: Harnessing Large Language Models to detect Weak Signals in complex socio-technical systems
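As a rough illustration of what "scaled and systematic" LLM screening of survey free text might look like, here is a minimal sketch: batch the responses and format each batch into a weak-signal screening prompt for an LLM. The batch size, function names, and prompt wording are all my own illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch of LLM-based weak-signal screening of survey text.
# Batching and prompt wording are illustrative assumptions, not the
# study's actual protocol.

def batch_responses(responses, batch_size=20):
    """Split survey responses into batches sized to fit an LLM context window."""
    return [responses[i:i + batch_size] for i in range(0, len(responses), batch_size)]

def build_screening_prompt(batch):
    """Format one batch of free-text responses into a screening prompt."""
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(batch))
    return (
        "You are reviewing frontline workers' descriptions of adaptive "
        "behaviours during everyday operations. For each numbered response, "
        "state whether it contains a potential weak signal of emerging risk, "
        "and briefly justify your judgement.\n\nResponses:\n" + numbered
    )

# Each prompt would then be sent to an LLM of choice for screening.
survey_responses = [f"response {n}" for n in range(45)]
prompts = [build_screening_prompt(b) for b in batch_responses(survey_responses)]
```

With 45 responses and a batch size of 20, this yields three prompts; the point is simply that batching plus a fixed rubric is what makes the analysis repeatable at scale.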
Tag: chatgpt
How is AI changing the nature of work?
What can we learn about how AI changes work based on >180k GitHub users? Turns out, quite a lot. AI induces people to shift task allocation towards their core work and away from non-core, project management activities. #ai #chatgpt #github #work #llm #artificialintelligence
Safer Systems: People Training or System Tuning?
Hollnagel discusses the role of training in complex systems. Shared under open access licence. PS. Check out my YouTube channel: Safe As: A thrifty analysis of safety, AI and risk – YouTube Extracts: · “Safety is usually seen as a problem when it is absent rather than when it is present, where accidents, incidents, and the like… Continue reading Safer Systems: People Training or System Tuning?
Does using AI make us smarter, or just more confident?
Does using AI make us smarter, or just more confident? This ep covers a recent study on how generative AI affects our “metacognition” – our ability to judge our own performance. Researchers tracked hundreds of people solving logical reasoning problems with and without AI.
Generative artificial intelligence adoption and managerial well-being in construction organizations
Can GenAI improve workplace well-being and work-life balance? Perhaps. This survey study of 261 construction industry managerial professionals in Nigeria provided insights. PS. Check out my YouTube channel – link in comments. Findings: · “findings reveal a strong and positive association between GenAI adoption and both dimensions of managerial well-being” · “Predictive scheduling, documentation automation, and design… Continue reading Generative artificial intelligence adoption and managerial well-being in construction organizations
Improving Construction Site Safety with Large Language Models: A Performance Analysis
This preliminary, proof-of-concept study explored the effectiveness of GPT-4o at visual hazard recognition in construction, comparing its performance against that of OHS experts. The source material was **static images** from Google and real construction sites (not real-time video analysis). The LLM and the experts were asked to rate the hazards, justify their judgements, and assess the immediate issues, use… Continue reading Improving Construction Site Safety with Large Language Models: A Performance Analysis
Safe As: Can AI make your doctors worse at their job?
Can #AI make your doctor worse at their job? This multicentre study compared physicians' ADR (Adenoma Detection Rate) before and after AI-assisted detection – and then after removing the AI assistance. What do you think – will the overall benefits of AI outweigh the negative, unintended skill declines in people? (*** Please subscribe, like and… Continue reading Safe As: Can AI make your doctors worse at their job?
AI deception: A survey of examples, risks, and potential solutions
This study explored how “a range of current AI systems have learned how to deceive humans”. Extracts: · “One part of the problem is inaccurate AI systems, such as chatbots whose confabulations are often assumed to be truthful by unsuspecting users” · “It is difficult to talk about deception in AI systems without psychologizing them. In humans,… Continue reading AI deception: A survey of examples, risks, and potential solutions
The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles
This study explored how AI literacy can amplify biases when evaluating AI-generated news articles, depending on content type (data-driven vs emotional). Extracts: · “Higher AI literacy can intensify opposing biases. When individuals better understand the use of AI in creating data-driven articles, they exhibit automation bias… Conversely, when AI generates opinion- or emotion-based articles, high literacy… Continue reading The literacy paradox: How AI literacy amplifies biases in evaluating AI-generated news articles
Large Language Models in Lung Cancer: Systematic Review
This systematic review of 28 studies explored the application of LLMs for lung cancer care and management. Probably few surprises here. And it’s focused mostly on LLMs, rather than specialised AI models. Extracts: · The review identified 7 primary application domains of LLMs in LC: auxiliary diagnosis, information extraction, question answering, scientific research, medical education, nursing… Continue reading Large Language Models in Lung Cancer: Systematic Review