E17: Critical Decisions & Local Rationality: Tools for making sense of situations

Why did they do that, what an idiot! What if our inability to understand the apparent stupidity of an action, after the fact, is more an issue with us than with the decisions or actions of the person we’re judging?

What are better ways–specific tools–to unpack the critical decisions and actions, and make sense of the local rationality of people caught within these situations?

Today’s pod unpacks two articles:

  1. Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462–472.
  2. Roe, L. Why it made sense at the time: Local rationality questions for healthcare investigations. Contemporary Ergonomics and Human Factors.

Make sure to subscribe to Safe As on Spotify/Apple, and if you find it useful then please help share the news, and leave a rating and review on your podcast app.

I also have a Safe As LinkedIn group if you want to stay up to date on releases: https://www.linkedin.com/groups/14717868/


Shout me a coffee (one-off or monthly recurring)

Spotify: https://open.spotify.com/episode/4nM60cEwZytdAV0VxijXNB?si=m4Xr-ZmVSS-6hTcJp5zXIA

Apple: https://podcasts.apple.com/us/podcast/e17-critical-decisions-local-rationality-tools-for/id1819811788?i=1000720486342

Transcript:

Stop blaming and start understanding. Do you commonly lament that the actions of your operators didn’t make sense, and wonder what they were thinking? If so, then maybe that’s a “you” issue, because you’re not properly framing why things made sense to that person at the time, and not providing the right inquisitive framework to contextualize their decisions. Let’s explore two tools that may help pull back the curtain on tough decisions.

Good day everyone. I’m Ben Hutchinson, and this is Safe As, a podcast dedicated to the thrifty analysis of safety, risk and performance research. Visit safetyinsights.org for more research. Today’s pod includes two papers, two different techniques, but with similar goals. The first is on the critical decision method, or CDM, by Klein, Calderwood and MacGregor, titled “Critical Decision Method for Eliciting Knowledge”, published in IEEE Transactions on Systems, Man, and Cybernetics. The second paper is from L. Roe, titled “Why It Made Sense at the Time: Local Rationality Questions for Healthcare Investigations”, published in Contemporary Ergonomics and Human Factors.

Now, onto CDM, the critical decision method. A lot has been written on CDM. I’m focusing on an earlier paper from Klein, and particularly on the tool itself and the questions, rather than the process of running a CDM inquiry. If you’re interested in the background and the actual process of running it, I’d suggest reading up on some of these other sources. I’m really diving into the more specific questions, because I think they’re really interesting for framing a better way to think about performance.

CDM is a semi-structured retrospective interview technique, primarily used in the naturalistic decision-making paradigm to research professional decision-making. That’s what was hinted at in the original paper, but it has since been used very widely outside of NDM. It’s particularly valuable for eliciting expert knowledge, decision strategies, and the cues that experts pick up on, or may not even realise they’ve picked up on.

So this NDM, or naturalistic decision-making, context focuses on real-world problems, often involving time pressure, high stakes, ill-structured problems, uncertain dynamic environments, or shifting and competing goals. The CDM tool really supports that type of research focus, but again, it’s used much more widely in practice now.

Very quickly, before I dive into the questions: the CDM interview involves an expert recounting a single selected, non-routine, professionally challenging incident where they were the main decision-maker. That’s just the standard approach, but it’s not the only approach. The interview typically lasts somewhere from one to two hours, with the facilitator and the expert reviewing the incident multiple times to add details and gain a deeper understanding. Probing questions are used to facilitate information retrieval. According to the authors, the CDM interview generally follows these steps. First, they select a non-routine incident that was challenging and where the decision-maker’s choices might have differed from those of someone with less experience. The goal is to identify cases that illustrate unique challenges and reveal aspects of true, really applied, expertise. The participant provides a brief description of the incident from beginning to end, typically without interruption, to establish context for the interviewer and to activate the participant’s memory. After that initial account, the interviewer reconstructs the incident chronologically, identifying key events, actions taken and information acquired. This helps establish a basic understanding of the event from the participant’s perspective and allows for corrections or filling in missing details.

The next few steps, which I’m really going to gloss over, involve identifying key decision points, points where judgments affected the outcome, for further probing, and this allows a real deepening of “what if” enquiries into those decision points. So let’s jump into the actual probes used during the critical decision interview, the tool itself.

The first probe is on cues: what were you seeing, hearing, smelling? Then knowledge: what information did you use in making this decision, and how was it obtained? Analogies: were you reminded of any previous experience? Goals: what were your specific goals at the time? Options: what other courses of action were considered or even available to you? Basis of choice: how was this option selected and other options rejected, and what rules or norms were being followed? Experience: what specific training or experience was necessary or helpful in making this decision? Aiding: if that decision wasn’t the best or the most optimal, what training, knowledge or information could have helped? Time pressure: how much time pressure was involved in making this decision? Situation assessment: imagine that you were asked to describe the situation to a relief officer at this point, how would you summarize the situation? Hypotheticals: if a key feature of the situation had been different, what difference would it have made to your decision?

These prompts are really useful for investigations. If you want to try to get into a person’s head and understand why something made sense to them, picking up on such cues and decision points, what information was even available to them, and building on alternative hypotheticals is a really effective deepening process. If you keep having lockout/tagout issues, or keep striking assets that were apparently known and identified, then considering these types of questions would be really useful.
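As a rough illustration, and not part of Klein et al.’s published method, here’s a minimal Python sketch of how the probe types above could be kept alongside example questions and turned into a blank note-taking sheet for an incident interview. The probe names and wording are paraphrased from this episode, and the `blank_probe_sheet` helper is purely my own assumption about how you might use them:

```python
# Illustrative sketch only, not part of Klein et al.'s published method:
# the probe types and example questions paraphrased in this episode, kept
# in one dictionary so they can be reused as an interview checklist.

CDM_PROBES = {
    "Cues": "What were you seeing, hearing or smelling?",
    "Knowledge": "What information did you use in making this decision, and how was it obtained?",
    "Analogies": "Were you reminded of any previous experience?",
    "Goals": "What were your specific goals at the time?",
    "Options": "What other courses of action were considered or available to you?",
    "Basis of choice": "How was this option selected and others rejected? What rules or norms were followed?",
    "Experience": "What specific training or experience was necessary or helpful in making this decision?",
    "Aiding": "If the decision wasn't optimal, what training, knowledge or information could have helped?",
    "Time pressure": "How much time pressure was involved in making this decision?",
    "Situation assessment": "If you had to describe the situation to a relief officer at this point, how would you summarize it?",
    "Hypotheticals": "If a key feature of the situation had been different, what difference would it have made to your decision?",
}


def blank_probe_sheet(incident_id: str) -> dict:
    """Return an empty note-taking sheet for one incident, keyed by probe type."""
    return {"incident": incident_id, "responses": {probe: "" for probe in CDM_PROBES}}


if __name__ == "__main__":
    # Print the probes as a quick reference before an interview.
    for probe, question in CDM_PROBES.items():
        print(f"{probe}: {question}")
```

Keeping the probes as data rather than prose makes it easy to record which probes were actually covered in each interview, but treat the wording above as a paraphrase, not the canonical probe set from the paper.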

Next, we explore the local rationality question tool from Roe. The CDM’s structured approach is generally used for analyzing events retrospectively, though it doesn’t have to be. The local rationality question tool, developed by Roe for healthcare safety investigations, provides a specific framework of questions to understand why an action or inaction made sense at the time, from the perspective of the staff involved, without creating feelings of blame or interrogation. As with the CDM before, I’m focusing more on the questions in the tool than on the process of using it.

This tool aligns with the concept of local rationality, which emphasizes understanding a situation from the decision-maker’s mindset: their knowledge, demands, goals and available context at the time. You may have heard of the related concept of “bounded rationality” from Herbert Simon; some argue they’re different, others argue they’re more or less the same. The local rationality question tool was developed through a literature review of safety science research, adapting existing questions to ensure they explored local rationality while prioritizing psychological safety. It compiles a collection of questions categorized for easy navigation.

Now, there are a lot of prompts in this tool, so I can’t cover them all, but the key categories and example questions include the situation: questions to explore the dynamic elements of the work environment and how the situation unfolded. For example: describe to me what was happening at the time. If you had to describe the situation to a colleague at this point, what would you have told them? Can you tell me about any time pressure to complete the task, or any limitations on what you were able to do at the time? Tell me about any reassessments of the situation that might have occurred. Was there any immediate feedback? Any careful monitoring? Can you break down the task into three to six steps? Over these steps, tell me a bit about those that require assessment, decision making or problem solving, if any. This could involve coming up with a task diagram for very complex procedures or situations.

Next is thoughts and decision making: questions to understand the individual’s focus, goals and the information that was utilized. As in: describe what you were focusing on. Can you explain what your aim or goal was during this time? Tell me about what information you used to help you make this decision. How did you obtain this information? Tell me about any barriers to obtaining this information. Can you tell me about any previous experience you had in similar situations? Were there any other options available at the time that, in hindsight, probably should have been evaluated?

Preparedness was another key category: questions concerning training, guidelines and other knowledge sources. It’s explored with questions like: can you tell me a bit about the training that you or the staff had to deal with this situation? What training, knowledge or information might have helped? Tell me what guidelines or policies there are in your unit to help manage the situation.

Another category is communication: questions exploring team interaction during the incident, like, describe how the team communicated with each other during this time. Another category is anticipation and thinking ahead: questions designed to understand expectations and imagined consequences, like, describe what you were expecting to happen, or can you explain to me what you were hoping would happen as a result of whatever it was.

So the local rationality tool is designed for flexible use, as a guide or inspiration rather than as a strict and mechanical script, encouraging an open-ended, inquisitive and non-blame approach. It’s argued that by focusing on what made sense at the time, it helps investigators reduce hindsight bias and promote a fairer and more restorative culture.

So both the CDM that I covered first, with its detailed interview probes and structured analysis, and the local rationality question tool, with its focus on understanding local rationality, are designed to gain deep insights into decision making in real-world, often challenging situations. I highly recommend you check out both sources, since I’ve really only scratched the surface, and see if they’re valuable additions to your adaptive toolboxes.
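If it helps to see the shape of the tool, here’s a second minimal sketch, again purely my own illustration rather than Roe’s actual tool, that groups a few of the questions paraphrased above by category and prints them as a plain-text prompt sheet an investigator could take into a conversation. The category names and wording are approximations from this episode; consult Roe’s paper for the full question set:

```python
# Illustrative sketch only: a handful of the local rationality questions
# paraphrased in this episode, grouped by category and printed as a
# plain-text prompt sheet. Category names and wording are approximations;
# consult Roe's paper for the full tool.

from textwrap import indent

LRQ_CATEGORIES = {
    "Situation": [
        "Describe to me what was happening at the time.",
        "If you had to describe the situation to a colleague at this point, what would you have told them?",
        "Tell me about any time pressure, or limitations on what you were able to do.",
    ],
    "Thoughts and decision making": [
        "Can you explain what your aim or goal was during this time?",
        "What information did you use to help make this decision, and how did you obtain it?",
        "Can you tell me about any previous experience you had in similar situations?",
    ],
    "Preparedness": [
        "Can you tell me about the training you or the staff had to deal with this situation?",
        "What guidelines or policies are there in your unit to help manage the situation?",
    ],
    "Communication": [
        "Describe how the team communicated with each other during this time.",
    ],
    "Anticipation and thinking ahead": [
        "Describe what you were expecting to happen.",
        "Can you explain what you were hoping would happen as a result?",
    ],
}


def prompt_sheet() -> str:
    """Build a plain-text interview guide with one section per category."""
    sections = []
    for category, questions in LRQ_CATEGORIES.items():
        body = "\n".join(f"- {q}" for q in questions)
        sections.append(f"{category}\n{indent(body, '  ')}")
    return "\n\n".join(sections)


if __name__ == "__main__":
    print(prompt_sheet())
```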
