This was an interesting brief discussion paper on automation transparency in the case of the tragic Boeing 737 Max 8 accidents.
Note: I’ve skipped heaps – the full paper is worth reading.
First, they note that in safety-critical sectors it is necessary to determine what operators need to know, and what feedback they require, for joint human/machine interactions – a concern known as automation transparency.
Automation transparency is discussed in relation to two meanings:
1) Seeing-through transparency: this creates a direct interaction between people and automated tasks via “a technology medium so well-designed as to appear invisible” (p1).
2) Seeing-into transparency: this seeks to facilitate human-automation interaction “by revealing the automation’s responsibilities, capabilities, goals, activities, inner workings, performance, or effects to the human in real time” (p1).
They next give a brief overview of the 737 Max design. Boeing modified their existing and proven 737 design – the larger engines had to be placed slightly further forward and higher on the wings than on the previous model. To minimise the high costs of pilot training, the Max was designed to replicate the flight controls of the more familiar B737.
However, the new placement of the engines made it more likely for pilots to angle the aircraft too steeply upwards during takeoff. Thus, a manoeuvring characteristics augmentation system (MCAS) was installed to work in the background during manual flight, automatically pushing the nose of the aircraft down when a sensor indicated that the climb angle was too steep.
Two fatal Boeing 737 Max 8 accidents were triggered by malfunctioning sensors that fed incorrect information about climb angle to the MCAS. The MCAS was incorrectly activated and pushed the nose of the aircraft down, requiring the pilots to intervene.
Importantly, they argue that “MCAS had incorrectly been presumed to be a fail-safe function that would operate reliably in the background” and because of this, MCAS was “implemented with no direct way for pilots to infer its intentions, activation” (p2).
The 737 Max MCAS was said to operate according to a seeing-through transparency – when operating correctly it gave pilots a familiar sense of control as per the previous B737, but when operating incorrectly pilots were “unexpectedly thrust into a situation where seeing-into automation transparency could have helped them tremendously” (p2).
That is, pilots had no direct way to infer what MCAS was doing. Pilots were said to have instead “ended up fighting with automation that was virtually impossible to understand and control” (p2).
Seeing-through vs seeing-into MCAS
It’s noted that the US Congressional hearing for the Max accidents highlighted that an indicator light was proposed in early designs to alert pilots to an MCAS failure – thus providing some element of seeing-into transparency. This light was later integrated into a different failure indicator.
Moreover, information on MCAS was removed from the flight crew operations manual and pilot training material – seemingly permitted by the Federal Aviation Administration. This change further reinforced the framing of the 737 Max as an upgrade and extension of a proven existing design rather than an aircraft implementing new and untested elements.
It’s said that the 737 Max “transitioned from a seeing-into to a seeing-through automation transparency design approach” (p2).
It’s important to note, though, that the Boeing designers were “reasonably concerned about the number of indications provided to pilots about automation systems they were not expected to have to interact with” (p2). Thus, an extensive seeing-into transparency approach to automation would have “almost certainly” resulted in information overload for operators. Nevertheless, relying almost exclusively on a seeing-through approach for this system “had catastrophic consequences in this case”.
Assumptions About Pilot Reaction to MCAS Failures
Based on whistleblower reports, it’s said that Boeing officials – even in the presence of FAA regulators – “admonished test pilots to be alert to MCAS failures and to respond within a few seconds with a specific control action” (p3).
That is, test pilots were instructed to be alert to issues with the MCAS, while certification guidance allowed manufacturers to assume that pilots would recognise failure conditions and respond appropriately. Hence, Boeing could assume that pilots would recognise MCAS issues in the course of routine work, without being legally required to inform pilots of the automation’s existence or train them on it.
Implications for Safety-Critical System Transparency
As mentioned earlier, the FAA worked with Boeing to certify the new aircraft “as a variant of an aircraft with an enviable safety record”, and the MCAS as an addition to the existing speed trim system. As the authors argue, “Boeing took explicit steps to avoid increased FAA certification”.
Despite the outcomes, this is not an indictment of seeing-through design approaches per se. The authors state that under other operational circumstances, having too many seeing-into transparency elements may overload operators and distract them from more core tasks. Indeed, in many situations seeing-through elements may be the more advisable choice.
They also speculate on whether seeing-through would have been the safer option with MCAS in the absence of inconsistent sensor input, if sensor maintenance practices had been less prone to failure, or if the 737 Max interface design hadn’t obscured automation issues (p3).
Or as the authors argue “Perhaps seeing-through transparency can be effectively realized if automation designers are able to consider the broader context of operation while at the drafting table” (p3).
The authors propose three key learnings (p3):
- Automation transparency is an industrial safety problem, and stakeholders should expect transparency issues in domains outside of aviation (e.g. nuclear, oil & gas). The authors are equally concerned that operators’ inability to see into the “inner workings” of automation extends far beyond aviation.
- There’s presently no evidence-based guidance to assist designers in selecting automation transparency approaches.
- Realistic testing of transparency design solutions is necessary, and there’s no known analytical alternative to this empirical testing.
The paper then raises three research questions for considering transparency in safety-critical systems (p3):
- How and when are seeing-through design principles a viable approach for complex sociotechnical systems?
- Does an alternative transparency design approach exist where operators are “informed about the inner workings of automation on a need-to-know basis?” (p3)
- What criteria should designers follow to select between transparency designs, and further, how can technology developers or regulators verify the chosen transparency framework?
Conclusion
Summing up, the authors argue that Boeing designed the MCAS’s transparency so that experienced pilots would recognise the familiar control feel of the previous B737. However, during unexpected conditions, the crew’s lack of advance knowledge of the MCAS “proved fatal”.
The designers provided “no means for seeing-into the MCAS or the faulty sensor data driving its erratic behavior” (p4).
Considering human interaction with automation is critical, since design choices can effectively turn technology into “a black box to the operators” (p4). Conversely, demanding too much interaction with technology via seeing-into approaches may overload operators and carry a “risk of decision paralysis” (p4).
Finally, they remark that the “free lunch” offered by automation transparency “may be more costly than advertised” (p4).
Authors: Jamieson, G. A., Skraaning, G., & Joe, J. (2022). IEEE Transactions on Human-Machine Systems.
Study link: https://ieeexplore.ieee.org/abstract/document/9759495
Link to the LinkedIn article: https://www.linkedin.com/feed/update/urn:li:ugcPost:6932062724885467136?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3AugcPost%3A6932062724885467136%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29