Should a Driverless Car Decide Who Lives or Dies?

by Keith Naughton, Bloomberg News

Automakers and Google are pouring billions into developing driverless cars. This week Ford said it was moving development of self-driving cars from the research lab to its advanced engineering operations. Google plans to put a “few” of its self-driving cars on California roads this summer, graduating from the test track.

With major automakers fast-tracking driverless vehicle development, a key question is whether such technology should make ethical choices.

Many car manufacturers are looking to Stanford University’s Center for Automotive Research (CARS), which is focused on programming cars to make ethical decisions. The hope is that self-driving vehicles will anticipate and avoid collisions, but when an accident is unavoidable, the car may have to choose between two evils, such as swerving onto a crowded sidewalk to avoid being rear-ended by another vehicle, or staying put and imperiling the driver.

Among the questions ethicists are wrestling with is whether the rules guiding autonomous vehicles should stress the greater good, saving the most lives while assigning no value to the individuals involved.
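One way to picture that "greater good" approach is a cost-minimizing rule that weighs only the expected number of people harmed, never who they are. The sketch below is purely illustrative, using invented probabilities and the hypothetical names `Outcome`, `expected_harm`, and `choose_action`; it is not how Stanford or any automaker has said the logic works.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is predicted to cause."""
    action: str            # e.g. "swerve onto sidewalk", "stay in lane"
    probability: float     # chance the predicted harm actually occurs
    people_harmed: int     # number of people hurt; identity is ignored

def expected_harm(outcome: Outcome) -> float:
    # A strictly utilitarian rule: every person counts the same,
    # so only the head count and its likelihood matter.
    return outcome.probability * outcome.people_harmed

def choose_action(outcomes: list[Outcome]) -> str:
    # Pick the maneuver with the lowest expected harm.
    return min(outcomes, key=expected_harm).action

# Hypothetical numbers for the sidewalk-vs-rear-end dilemma in the article.
scenario = [
    Outcome("swerve onto crowded sidewalk", probability=0.9, people_harmed=3),
    Outcome("stay in lane, absorb rear-end collision", probability=0.5, people_harmed=1),
]

print(choose_action(scenario))  # -> "stay in lane, absorb rear-end collision"
```

The contested point in the article is exactly the simplification inside `expected_harm`: assigning no value to the individuals involved.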

“Driverless cars are going to set the tone for all social robots,” predicts California Polytechnic University’s Patrick Lin. “These are the first truly social robots to move around in society.”

This summer, CARS director Chris Gerdes will be testing driverless vehicles programmed to follow ethical rules when making split-second decisions. One rule governs when it is appropriate to violate traffic laws and cross a double yellow line to make room for cyclists or double-parked vehicles.
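A rule of that kind can be stated as an explicit precondition check. The thresholds, sensor fields, and function name below are assumptions made for illustration; the actual logic used in the Stanford tests has not been published.

```python
from dataclasses import dataclass

@dataclass
class RoadState:
    """A simplified snapshot of what the car's sensors report."""
    obstacle_ahead: bool        # cyclist or double-parked vehicle blocking the lane
    oncoming_gap_s: float       # seconds until the next oncoming vehicle arrives
    clearance_needed_s: float   # time needed to pass the obstacle and return

def may_cross_double_yellow(state: RoadState) -> bool:
    # Crossing the line is only justified when it is the way past an
    # obstruction and the oncoming lane stays clear long enough to
    # complete the maneuver with a safety margin.
    SAFETY_MARGIN_S = 2.0       # assumed buffer, not a published value
    if not state.obstacle_ahead:
        return False            # no reason to leave the lane
    return state.oncoming_gap_s >= state.clearance_needed_s + SAFETY_MARGIN_S

# A cyclist ahead, 8 s of clear oncoming lane, 4 s needed to pass: cross.
print(may_cross_double_yellow(RoadState(True, 8.0, 4.0)))   # True
# Oncoming traffic too close: stay behind the cyclist.
print(may_cross_double_yellow(RoadState(True, 5.0, 4.0)))   # False
```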

DCL: We are entering the realm of predictive event processing in a big way here. Ethics in the decision-making aspects of CEP is new territory, I think.
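Read through a CEP lens, the ethical rule becomes a guard on the action that a predicted-event pattern would otherwise trigger. The toy event-condition-action loop below is only a sketch of that idea; the event names, fields, and guard are assumptions, not a reference to any particular CEP engine.

```python
def ethical_guard(action: str, context: dict) -> bool:
    # Reject any response that shifts harm onto bystanders.
    return not (action == "swerve" and context.get("pedestrians_in_path", 0) > 0)

def handle_event(event: dict) -> str:
    context = event.get("context", {})
    # Pattern: a predicted rear-end collision normally triggers a swerve...
    proposed = "swerve" if event["type"] == "predicted_rear_end" else "maintain"
    # ...but the ethical guard can veto it and fall back to braking in lane.
    return proposed if ethical_guard(proposed, context) else "brake_in_lane"

print(handle_event({"type": "predicted_rear_end",
                    "context": {"pedestrians_in_path": 4}}))  # -> "brake_in_lane"
```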
