Tag: automation

Ergonomics & Human factors: fade of a discipline

This commentary from de Winter and Eisma argues that Human Factors & Ergonomics (HFE) may be “losing credibility” and significance. Despite claims that HFE is a thriving science, the authors argue that the discipline may be at risk of slowly fading in the face of several challenges. This paper prompted a number of follow-up articles and rebuttals from other… Continue reading Ergonomics & Human factors: fade of a discipline
Human Success: Old wine in new bottles, or a shift of mindset for HRA in an automated world?
A really interesting conference paper from Andreas Bye discusses whether shifting Human Reliability Analysis (HRA) terminology from human error to human success would help alleviate some of the connotations of blame. Also discussed is the human role in automated systems. It was meant to be a mini-post with a few dot-points and a couple of images, but… Continue reading Human Success: Old wine in new bottles, or a shift of mindset for HRA in an automated world?
Automation’s lacklustre effects on fatal accidents & cheap migrant labour hampering adoption of engineering controls
REALLY interesting findings from Associate Professor Masahiro Yoshida. They suggest that, historically, automation did little to drive down workplace injuries, since it tended to be employed in industries that were already mature. Further, ready access to cheap migrant labour may hinder broader industrial risk reduction, given its negative correlation with investment in automation. And the… Continue reading Automation’s lacklustre effects on fatal accidents & cheap migrant labour hampering adoption of engineering controls
The ironies of ‘human factors’
This brief book chapter from Hollnagel & Dekker adopts Lisanne Bainbridge’s idea of ironies in the context of contemporary human factors practice. I can’t cover all of the points here, but I highly recommend checking out Bainbridge’s original paper (link in comments). An irony in this context is a “solution which expands rather than solves or eliminates a problem,… Continue reading The ironies of ‘human factors’
Wrong, Strong, and Silent: What happens when automated systems with high autonomy and high authority misbehave?
This article from Dekker and Woods discusses the ‘risks of literal-minded automation’: a “system that can’t tell if its model of the world is the world it is actually in”. The issue manifests in automated systems being wrong, strong, and silent, and while it has existed for at least 70 years, the risk “looms… Continue reading Wrong, Strong, and Silent: What happens when automated systems with high autonomy and high authority misbehave?