Cross-posted to Cognition and Evolution.
Trolley problems are moral exercises (or experiments) in which participants are asked to choose between horrible options, to see how our moral reasoning works. For instance: do you push one person onto the tracks of an out-of-control trolley to keep it from running over five people further down the line? Or do you simply stay silent when you see a single careless person wandering into the trolley's path of his own accord, thereby keeping those five people safe? Among the interesting observations from these experiments: we are inconsistent, and people who claim vastly different moral foundations (religious believers and atheists, for example) tend to choose the same answers, as long as they otherwise share a cultural background (e.g., they're both American). Some enterprises already exist that attempt to "solve" morality, that is, to make it precise enough to program into a computer, partly motivated by belief in an impending technological singularity.
Trolley problems have been criticized as a poor window into actual moral reasoning, for some of the same reasons as any consequence-free self-report method. But literal trolley problems are becoming consequential now that we have driverless cars. Their engineers have to encode how these cars will decide whom to hit if they ever find themselves in a situation where hitting someone is unavoidable. If the choice is between a helmeted and an unhelmeted cyclist, shouldn't the car aim for the helmeted cyclist to minimize harm?
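To make that concrete, here is a deliberately toy sketch of what "encoding" such a choice might look like. Nothing in it comes from any real driverless-car system; the Obstacle type and the harm numbers are entirely hypothetical, and the point is only that someone has to pick them.

```python
# A toy sketch of a harm-minimization rule. Hypothetical throughout:
# no real autonomous-vehicle system is being described here, and the
# fatality estimates are invented for illustration.

from dataclasses import dataclass


@dataclass
class Obstacle:
    label: str
    probability_of_fatality: float  # engineer-supplied estimate, 0.0 to 1.0


def choose_unavoidable_collision(obstacles: list[Obstacle]) -> Obstacle:
    """If a collision is unavoidable, pick the obstacle with the lowest
    estimated probability of fatality. The hard part is not this line of
    code; it is choosing and defending the numbers fed into it."""
    return min(obstacles, key=lambda o: o.probability_of_fatality)


if __name__ == "__main__":
    cyclists = [
        Obstacle("helmeted cyclist", probability_of_fatality=0.3),
        Obstacle("unhelmeted cyclist", probability_of_fatality=0.8),
    ]
    print(choose_unavoidable_collision(cyclists).label)
    # Prints "helmeted cyclist" -- exactly the kind of outcome that makes
    # people uneasy about encoding these trade-offs at all.
```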
If only things were as simple as Asimov's Three Laws of Robotics.