
Many organisations rely on root cause analyses (RCAs) to help them learn from incidents and, ideally, prevent incident reoccurrences.
So the logic goes. But does the published evidence support RCA approaches as an effective means of preventing incident reoccurrences?
Today’s paper is Martin-Delgado, J., Martínez-García, A., Aranaz, J. M., Valencia-Martín, J. L., & Mira, J. J. (2020). How much of root cause analysis translates into improved patient safety: a systematic review. Medical Principles and Practice, 29(6), 524-531.

Shout me a coffee (one-off or monthly recurring)
Make sure to subscribe to Safe As on Spotify/Apple, and if you find it useful then please help share the news, and leave a rating and review on your podcast app.
I also have a Safe As LinkedIn group if you want to stay up to date on releases: https://www.linkedin.com/groups/14717868/
Spotify: https://open.spotify.com/episode/17WO79whkaFo9rYpmnCc5G?si=x07B7txFSNq0k8iAwUiVYA
Transcript:
In industry, root cause analyses, or RCAs, are a widely used collection of approaches and tools. They’re meant to be a rigorous process for digging deep after an adverse event, identifying contributing factors and crafting solutions that will prevent reoccurrences. The expectation is that an RCA leads to real, tangible improvements. But what does the latest research actually show about the effectiveness of RCAs?
Are RCAs consistently delivering on their promises to make our workplaces safer, or are we sometimes just going through the motions without seeing the desired impact? G’day everyone, I’m Ben Hutchinson and this is Safe As, a podcast dedicated to the thrifty analysis of safety, risk, and performance research. Visit safetyinsights.org for more research.
Today’s study is Martin-Delgado et al. (2020), entitled “How Much of Root Cause Analysis Translates into Improved Patient Safety – A Systematic Review”. Published in Medical Principles and Practice, it’s a systematic review that explored whether root cause analyses are effective at reducing the occurrence of avoidable adverse events in healthcare – or, as I’ll just call them, adverse events.
Twenty-one studies were included: nine of moderate quality, five of considerable quality, and seven of high quality. It’s another healthcare study, as you’ve probably picked up. It’s worth highlighting that RCA is an umbrella term; it’s not some monolithic tool, but rather covers many different tools and methods. Because of the varied study designs and the overall limited evidence, the authors couldn’t sort the studies into a comparison of specific RCA tools.
So what did they find? Overall, the effectiveness of RCAs is unclear and inconsistent. According to the paper, it’s not clear whether root cause analysis is effective in preventing the recurrence of adverse events. And although early studies suggested that RCAs are effective in generating ideas for preventing recurrence, more recent studies don’t confirm these findings. Put simply, we’re not seeing stronger or more consistent evidence that RCA processes actually help prevent future harm.
Digging into the details, in only two studies could it be established that RCAs contributed, to some extent, to the improvement of patient care. And these two studies were themselves limited by the small number of RCAs reviewed, weakening the evidence overall. They also found that in 50% of cases, the recommendations from the RCAs were quite weak and didn’t lead to a reduction in adverse events. So half of the time, the outputs from RCAs aren’t even strong enough to drive tangible improvements in the very things they’re supposed to help improve.
They also found that the action plans, or corrective actions, were often poorly designed and untested. One study found that action plans didn’t follow any controlled implementation pattern, so no link could be established between the plans and tangible operational improvements. Put simply, the fixes that result from these RCA processes may be too flawed or too poorly implemented to improve operational outcomes, and they may even create new problems.
They also found that most recommendations ignored deeper systemic issues. For instance, most of the proposed recommendations focused on active errors by people and neglected latent contributing factors. A focus on people may provide short-term solutions, but it only partially helps avoid future problems; it doesn’t improve the conditions people work in. So instead of addressing system flaws, these RCA processes, according to this research, often result in blaming frontline staff or producing only shallow solutions.
They also found a marked lack of follow-up and verification of improvements. The RCA processes didn’t require any checks on whether improvements were actually carried out. This disconnect undermines their usefulness; without follow-up, even good recommendations can fail.
Not surprisingly, they found quite low involvement from those closest to the incident: managers and personnel involved in the actual adverse events had low participation rates in the investigation teams. This limited not only the teams’ insight into what was actually happening, but also reduced the potential for psychological recovery for second victims and others emotionally affected by the incident. Put another way, leaving out frontline staff leads to weaker analysis and missed chances for healing and improvement.
And they found that a blame culture discourages reporting and RCA participation. Several of the cultures studied were focused on searching for a guilty party, for those responsible. This created tensions in the work environment that strained interprofessional relationships; fear and lack of trust led professionals to refuse even to participate in the incident processes. Simply put, if people fear punishment, they won’t speak up.
So in conclusion: while RCAs may be useful for some things, they tend to produce immediate, localised fixes, but they don’t seem very effective at generating long-term, well-implemented measures to prevent incidents from recurring. And although some studies have demonstrated the usefulness of RCAs and their recommendations, most of the published studies, at least in this healthcare review, found that just over half of the recommendations weren’t even strong enough to prevent the same incidents from recurring. They were disconnected from the things they were supposed to be fixing. So the authors conclude that RCA approaches can help in understanding some contributing factors, but they often fail when it comes to the fixes. Instead, the fixes tend to be weak, shallow, or ill-suited to improving complex environments like healthcare.
So what do we make of these findings? Well, again, it’s not one tool; RCA covers a lot of ground. In any case, I think we need to be clear about what we want RCA processes to achieve, what they can achieve and should achieve, and then evaluate whether our processes are configured for those results. Do we have unfair or unrealistic expectations of what they can deliver? Do we even have the right resources and expertise? Maybe a shake-up is needed. Do we need fewer investigations and more proactive learning? Better feedback and engagement channels with workers, or design-level interventions? What about involving different teams, or trying different tools and approaches? And instead of building a shopping list of contributing factors and detailed timelines that benefit few, maybe spend more time learning about daily work and its constraints, and designing in better, safer work.
Importantly, work I covered previously from Lundberg suggested that investigations, probably like most processes in organisations, are affected by a host of socio-political factors separate from the incident itself that still significantly shape what investigations find, construct, and ultimately try to fix. Maybe, to a degree, some of these constraints are just fundamental limits within organisations.
As for limitations, there were a few, but I think the main one is the relatively limited body of evidence on RCAs and what they actually change in occupational settings. Also, I sound like a broken record, but there’s the issue of relying on statistically rare events like incidents as an outcome measure; it’s a huge limitation, because it’s really difficult to connect upstream factors with downstream outcomes. That said, a number of other healthcare RCA studies using different measures, like corrective action quality, also find widespread and systemic issues with RCA approaches.