
This article from Dekker and Woods discusses the risks of ‘literal-minded automation’: a “system that can’t tell if its model of the world is the world it is actually in”.
This issue manifests in automated systems being wrong, strong and silent—and while the issue has existed for at least 70 years, the risk “looms larger today as our ability to deploy increasingly autonomous systems and delegate greater authority to such systems expands”.
I’ve skipped heaps – particularly much of the main worked example about the Boeing 737 MAX MCAS.
First they note that “Norbert Wiener (1950) highlighted the risks of literal-minded machines—a system that can’t tell if its model of the world is the world it is actually in.”
Consequently, “the system will do the right thing”, in the sense that its actions are appropriate given its model of the world, even when that model doesn’t reflect reality.
Literal-minded automated systems have associated risks and are linked with incidents, outages and financial losses. The authors provide examples from vehicle path control, medication infusion incidents, plan following, stock trading, and loss of vehicle-to-vehicle separation.

When the assumed and actual worlds don’t match, “automated systems will misbehave, taking actions that are inappropriate and possibly dangerous”.
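To make that idea concrete, here is a minimal, hypothetical sketch (my own, not from the paper): a literal-minded controller that acts on its internal model of the world and has no way to check whether that model still matches reality. The class name, thermostat setting and stuck sensor are all invented for illustration.

```python
# Hypothetical sketch only: a "literal-minded" controller acting on its internal
# model of the world, with no way to tell whether that model matches reality.
class LiteralMindedController:
    def __init__(self, target_temp_c: float):
        self.target_temp_c = target_temp_c
        self.believed_temp_c = target_temp_c  # the only "world" the controller knows

    def update_belief(self, sensor_reading_c: float) -> None:
        # The controller trusts its input completely; a stuck or faulty sensor
        # silently becomes its model of the world.
        self.believed_temp_c = sensor_reading_c

    def command(self) -> str:
        # Appropriate relative to its belief, not necessarily relative to reality.
        if self.believed_temp_c < self.target_temp_c:
            return "HEAT"
        if self.believed_temp_c > self.target_temp_c:
            return "COOL"
        return "HOLD"


controller = LiteralMindedController(target_temp_c=20.0)

actual_temp_c = 30.0   # reality: the room is already too hot
stuck_sensor_c = 10.0  # the sensor is stuck reading cold
controller.update_belief(stuck_sensor_c)

# The command is "right" for the assumed world and wrong (possibly dangerous)
# for the actual one.
print(controller.command())  # -> HEAT
```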
Those in supervisory roles, including people, can find it difficult to shift from a monitoring role into an active role during a developing non-normal or abnormal situation. Some relevant questions here, quoting the paper:
· What is the automation doing?
· How would you know, if the automated systems are opaque?
· What will it do next?
· What is the configuration currently controlling key parameters or processes?
If the mechanisms that re-direct parts of automated suites are ‘clumsy’, then supervisory control “has a built-in vulnerability to breakdowns where the automation is strong, silent and wrong”.
Strong, Silent, Difficult to Direct Automation in Aviation
The paper focuses primarily on strong, silent and difficult to direct automated systems in aviation. This ‘adaptive pattern’ was evident during the expansion of flight deck automation in the 1980s.
Research in this area revealed the issue of ‘mode awareness’ and ‘automation surprises’. These are said to “denote a breakdown in human-machine coordination that could be traced to the new ‘strong, silent, difficult to direct’ suite of automated subsystems on the flight deck”.
Definitions:
Strong: Refers to the authority delegated to an automated system, as in aviation where a sub-system has the authority to take over control of the aircraft from the flight crew if its inputs/internal model logic call for it.
They give the example of indirect mode changes, where “automation changes the configuration of the automation, going beyond the specific pilot supervisory input to the automation”. This behaviour is what led researchers to identify mode awareness (and its loss) as a contributor to automation surprises.
Silent: Refers to low observability, particularly in the form and quality of feedback between human and machine agents about how the automation is configured and what it is doing. For example, changes in flight path control, flight manoeuvres, etc.
It’s argued, drawing on Earl Wiener’s work, that this is about how smoothly human supervisors can answer questions like:
· What is the automation doing?
· What is it going to do next?
· Why did it do that?
· How did we get into this mode?
Difficult to direct: Refers to “how smoothly the design allows the flight crew to modify automated system configuration/behavior as conditions and priorities change given the tempo of operation”. For instance, how the flight crew and other supervisors can manage the automation.
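As a rough, hypothetical illustration of the three properties together (this is my own sketch, not anything from the paper or from MCAS; the class, modes and numbers are invented), consider a subsystem whose design choices make it strong, silent and difficult to direct:

```python
# Invented example: design choices that make an automated subsystem
# "strong, silent, difficult to direct". Names and numbers are illustrative only.
class AutomatedSubsystem:
    def __init__(self, announce_mode_changes: bool, allow_disengage: bool):
        self.announce_mode_changes = announce_mode_changes  # False -> "silent"
        self.allow_disengage = allow_disengage              # False -> "difficult to direct"
        self.mode = "FOLLOW_CREW"

    def step(self, crew_command: float, model_says_unsafe: bool) -> float:
        if model_says_unsafe:
            # "Strong": authority to take over from the crew when its internal
            # model calls for it -- whether or not that model matches reality.
            if self.mode != "OVERRIDE" and self.announce_mode_changes:
                print("MODE CHANGE: OVERRIDE engaged")  # observable feedback
            self.mode = "OVERRIDE"
            return -2.5                                 # its own corrective command
        self.mode = "FOLLOW_CREW"
        return crew_command

    def disengage(self) -> bool:
        # "Difficult to direct": the crew may have no smooth way to re-direct or stop it.
        return self.allow_disengage


automation = AutomatedSubsystem(announce_mode_changes=False, allow_disengage=False)
print(automation.step(crew_command=1.0, model_says_unsafe=True))  # -2.5, with no announcement
print(automation.disengage())                                     # False: cannot simply be told to stop
```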

When sensor inputs to strong, silent, difficult to direct automation go bad
How literal-minded automata contribute to accidents is multi-faceted. One facet is revealing hidden interdependencies, such as in software or sub-systems. Another is bad inputs to automated systems operating with high authority – as in aviation examples where automated systems acted on faulty sensor data or sensor failures.
They devote a lot of the paper to the Boeing 737 MAX MCAS system. I’ve skipped most of this. One point is that MCAS operated based on Angle of Attack data, but took input from just a single sensor. Normally there is redundancy across multiple sensors, but “MCAS was not part of the safety case for the modified 737 MAX; why would the extra steps for extra reliability matter?”.
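To illustrate why feeding a high-authority function from one sensor is brittle, here is a generic sketch of sensor redundancy and cross-checking – not Boeing’s actual architecture; the function names and the 5-degree disagreement threshold are invented for the example.

```python
# Generic illustration of single-sensor input vs. cross-checked redundant sensors.
# Not Boeing's design; thresholds and values are invented for the example.
from statistics import median
from typing import List, Optional


def single_sensor_aoa(readings: List[float]) -> float:
    # Trusts whichever sensor it happens to be wired to; a faulty value
    # passes straight through to the high-authority function.
    return readings[0]


def voted_aoa(readings: List[float], disagree_threshold: float = 5.0) -> Optional[float]:
    # Cross-check redundant sensors; if they disagree too much, decline to act
    # (None stands in for "flag the fault and degrade gracefully").
    mid = median(readings)
    if any(abs(r - mid) > disagree_threshold for r in readings):
        return None
    return mid


# One faulty vane reads 40 degrees while the others read about 5 degrees.
readings = [40.0, 5.1, 4.9]

print(single_sensor_aoa(readings))  # 40.0 -> the automation "sees" a dangerous pitch-up
print(voted_aoa(readings))          # None -> disagreement detected, no corrective command issued
```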
They argue that another issue here is what constituted a ‘failure’ of MCAS. They say that engineering considered only the controller design and software to be the MCAS system, excluding the sensors, reliability, alerts, and even the supervisory control features.
Hence, “the narrow perspective meant many claimed “the automation” didn’t fail”, and this tendency to narrow the scope “hides the true integrated system, its complexity, overestimates reliability and underestimates the need for resilient supervisory control”.
Literal-minded automation was also present with MCAS: it acted according to its model of the world, and this model mismatched reality. For instance, in reality the aircraft’s attitude was normal, but to MCAS the aircraft was pitching up, and it responded vigorously.

For the pilots, the “classic automation surprise pattern” resulted, with “multiple lines of cognitive work interwoven as the tempo of operations increases, uncertainty is high, danger is increasing while opportunities for recovery are vanishing”.
A number of other issues were also present with MCAS – I’ve skipped their examples, except for the point that people didn’t really “have knowledge or guidance on how to stop an automatic control system they did not know existed”, and that MCAS wasn’t described by name in manuals or training.
They argue that “the design for supervisory management of MCAS as part of a suite of automated systems for different aspects of flight was virtually non-existent” and is a “compelling exemplar” of what can happen when poor design of supervisory management combines with high authority/high autonomy automation.
Conclusion: Misbehavior of Strong, Silent, Difficult to Direct Automation is a General Risk
Wrapping up their arguments, they observe how:
“Out-of-Control Automated systems with high autonomy and high authority will misbehave when factors combine to create a gap between the internal model of the world and the actual events/context going on in the world where the automation is deployed”.
This risk, problematically, is “inescapable and individual incidents or accidents involving misbehavior of strong, silent, difficult to direct automation occur regularly as stakeholders deploy increasingly autonomous systems with high authority in dynamic risky worlds”.
One risk with the systematic management of this challenge is the tendency to see the issue on a case-by-case basis, and/or to leave it to engineering judgement. It’s said that “Organizational and financial pressures easily overwhelm engineering teams’ ability to address the risk”, which the Boeing example highlights.
They also say that stakeholders regularly “discount the systemic and organizational lessons from these breakdowns” – a tendency called ‘distancing through differencing’.
Although arguments for greater human or machine supervisory oversight of automated systems make sense in hindsight, such “analysts operate with knowledge/time/resources unavailable to supervisors responsible for a risky system”.
Further, those supervisors, at the time, face “uncertainties, overload, and pressures after-the-fact analysts miss or underplay”.
They believe that the ‘solution’ to misbehaving automated systems, and to the problem of strong, silent and difficult to direct automation, is theoretically straightforward. Here, “More robust and resilient control is indeed possible if and only if stakeholders recognize the risk as fundamental and expand the systems engineering concepts/techniques”.
Unless this shift becomes the norm, formally and informally, “the race to deploy high authority/high autonomy systems will be accompanied by incidents/accidents driven by misbehavior of strong, silent, difficult to direct automation”.
They argue that there are also pragmatic means to achieve more resilient and less brittle system architectures, but “utilizing the knowledge requires a substantial shift at organizational levels to re-balance/re-prioritize the trade-off between maximizing short-term gains while discounting longer-term risks of autonomy”.
Ref: Dekker, S. W., & Woods, D. D. (2024). Wrong, Strong, and Silent: What Happens when Automated Systems With High Autonomy and High Authority Misbehave? Journal of Cognitive Engineering and Decision Making, 15553434241240849.
My site with more reviews: https://safety177496371.wordpress.com
LinkedIn post: https://www.linkedin.com/pulse/wrong-strong-silent-what-happens-when-automated-high-ben-hutchinson-veqxc