
This discussion paper explores the introduction of AI systems into healthcare. It covers A LOT of ground, so here are just a few extracts:
· “This article advocates an ‘embrace with caution’ stance, calling for reflexive governance, heightened ethical oversight, and a nuanced appreciation of systemic complexity to harness generative AI’s benefits while preserving the integrity of healthcare delivery”
· “Knowledge about how healthcare professionals are using ChatGPT in their practice is still limited, and the broader application of GAI in healthcare remains largely unexplored”
· Some work has suggested that ChatGPT can “support logistics and manage patient records, thereby freeing up time for direct patient care”, as well as help with choosing and planning procedures
· “However, concerns remain regarding AI hallucinations and the need for human oversight”
· “when healthcare professionals begin integrating ChatGPT into their practice, they are not simply adopting a new tool; they are also navigating the introduction of a disruptive technology into a complex adaptive system. This integration requires rethinking workflows, professional roles, and the nature of patient-provider interactions”
· “One study demonstrated that integrating ChatGPT into nursing information systems reduced documentation time from 15 to 5 minutes per patient without compromising record quality”
· While AI can improve some process flows, it “may also alter the role of healthcare professionals from that of active diagnosticians to verifiers of AI-generated recommendations” [** Bainbridge warned of this decades ago in the context of automation]
· AI has been shown to have an “inability to reference accurate sources [and] may result in the promotion of alternative therapies over conventional treatments, which may mislead patients and delay proper diagnosis”
· “Oviedo-Trespalacios et al. (2023) showed how ChatGPT’s confident yet often misleading responses can cause patients to alter their treatment plans without consulting a physician”
· “The study underscores how AI-generated advice – despite being persuasive – can lack accuracy, contributing to misdiagnosis and potentially harmful health decisions”
· However, some studies “have demonstrated ChatGPT’s capacity for explaining rare diseases and medical conditions while offering medication recommendations for common issues such as depression”
· But its use in healthcare “introduces significant ethical and clinical challenge[s]”
· Trust is critical in healthcare, and AI changes this dynamic between carers and patients by being “both a source of support and a potential disruptor of trust”

· Bias is widespread in AI models: “studies have found that some AI-driven diagnostic tools struggle to detect medical conditions in Hispanic women, while mental health assessment models frequently overlook signs of psychological distress in non-native language speakers”
· When used in educational settings, such as with students, research has found “that while ChatGPT-4 demonstrated strong analytical and problem-solving skills – outperforming undergraduate students in critical thinking assessments – it struggled with complex inferential reasoning and exhibited limitations in creative problem-solving”
· Hence, “reinforcing the need for human oversight in high-stakes learning environments” and ensuring a human-in-the-loop approach
· For some, AI applications demand “clearer regulatory frameworks [and] transparency and explainability in AI-driven learning environments”
· And human judgement and critical thinking are always essential: “Technology should not replace human judgment and expertise”
· Universities should not only train clinicians on how to use AI, but “also provide ethical training on its limitations and potential biases, fostering a balanced and informed approach”
· Hence, “Ensuring a Human-in-the-Loop approach is crucial in healthcare education, where AI should function as a complementary tool rather than an authoritative source” (a rough sketch of what this could look like follows this list)
· Incorporation of AI should “aim to enhance, rather than disrupt, the delicate balance of healthcare systems”
· Healthcare is a complex adaptive system (non-linear, emergent and interdependent), meaning even well-intentioned interventions, such as AI, “can trigger unpredictable consequences”
· And unlike some other tools that integrate within existing workflows, AI “actively reshapes the system in which it operates, influencing professional roles, decision-making hierarchies, and institutional structures”
· Problematically, complex adaptive systems “cannot be fully governed by static regulatory models, as their adaptive nature requires iterative, flexible governance structures”
· Hence, “AI, as an emergent actor in this system, necessitates a similar reflexive approach, by continuously adjusting to evolving interactions across professionals, patients, and institutions”
· “Imposing rigid, pre-emptive regulations entails the risk of constraining AI’s adaptability, while having uncontrolled AI adoption entails the risk of destabilising critical decision-making structures”
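The paper doesn’t offer an implementation, and nothing below comes from it. Purely as a rough illustration of what “complementary tool rather than an authoritative source” could mean in software, here is a minimal Python sketch of a human-in-the-loop gate: an AI-drafted note is structurally a draft that cannot be filed until a named clinician reviews and signs it off. All names (DraftNote, clinician_signoff, etc.) are hypothetical.

```python
# Minimal, hypothetical sketch of a human-in-the-loop (HITL) gate for
# AI-drafted clinical documentation. Illustrative only; not from the study.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftNote:
    patient_id: str
    ai_text: str                      # AI-generated draft (e.g. from an LLM)
    verified: bool = False            # stays False until a human signs off
    reviewer: str | None = None
    reviewed_at: datetime | None = None

def clinician_signoff(note: DraftNote, reviewer: str, approved_text: str) -> DraftNote:
    """A human must review (and may edit) the draft before it becomes a record."""
    note.ai_text = approved_text      # the clinician's edits override the AI draft
    note.verified = True
    note.reviewer = reviewer
    note.reviewed_at = datetime.now(timezone.utc)
    return note

def commit_to_record(note: DraftNote) -> None:
    """The system refuses to file anything the AI wrote on its own."""
    if not note.verified:
        raise PermissionError("Unverified AI draft: clinician sign-off required.")
    print(f"Filed note for {note.patient_id}, signed off by {note.reviewer}.")

# Usage: the AI drafts, the human decides.
draft = DraftNote(patient_id="P-001", ai_text="Draft summary from the model...")
clinician_signoff(draft, reviewer="Dr. Example", approved_text="Reviewed summary.")
commit_to_record(draft)
```

The design point is simply that verification is enforced by the system rather than left to habit: the AI’s output can never reach the record on its own authority.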

Study link: http://www.inderscience.com/storage/f122103511489716.pdf