The trolley problem, a cornerstone of ethical thought experiments, is usually confined to hypothetical scenarios we are unlikely ever to face. A new application created by a team of researchers at MIT changes that. The ethics of technological systems and their interactions with humans is an increasingly important field, as human programming now determines, in some cases, choices that can take a life. The example used in this Moral Machine relates to self-driving cars.
The ethical implications of indirect programming are illustrated to a particularly unsettling extent in this experiment, not least because many decisions require an explicit ranking of human value. With no definitive guide on the 'right' way to protect life, the responsibility lies with policymakers to legislate proactively in a rapidly automating world. Remaining on the legislative back foot, as the persistent lack of precedent or law on sanctioning crimes committed on online platforms has shown, threatens the efficacy of government in a modernised world. Thus such tools, even an informal online game, can provide useful guidance in navigating the new demands of ethical programming.
Try the game for yourself here: http://moralmachine.mit.edu/