The Trolley Problem Revisited

The trolley problem, a cornerstone of ethical thought experiments, is usually confined to hypothetical scenarios we are unlikely ever to face. A new application created by a team of researchers at MIT changes that. The ethics of technological systems and their interactions with humans is an increasingly important field, as human programming now determines, in some cases, choices that can take life away. The example used in this "Moral Machine" concerns self-driving cars.
Users are invited to make judgements across a series of situations in which a self-driving car suffers brake failure and casualties are unavoidable; the only choice is who will be killed. So far, the research has gathered some 40 million decisions from people in 233 countries and territories, allowing for detailed analysis of preferences. The strongest preferences (reassuringly) placed human lives over animal lives and sought to save the greatest number of lives possible. Weaker preferences favoured women over men, and acting over remaining passive. (See below for The Economist's complete breakdown of preferences.)

The ethical implications of indirect programming are illustrated to a particularly unsettling extent by this experiment, not least because many of its decisions require an explicit ranking of human value. With no definitive guide on the 'right' way to protect life, the responsibility lies with policymakers to legislate proactively in a rapidly automating world. Remaining on the legislative back foot, as the persistent lack of precedent or law for sanctioning crimes committed on online platforms makes apparent, threatens the efficacy of government in a modernised world. Tools such as this one, even in the informal guise of an online game, can therefore provide useful guidance in navigating the new demands of ethical programming.
Try the game for yourself here:
