This article from Dekker and Woods discusses the risks of "literal-minded automation": a "system that can't tell if its model of the world is the world it is actually in". The issue manifests in automated systems that are wrong, strong, and silent, and while it has existed for at least 70 years, the risk "looms…"