Robot Cars Can’t Count on Us in an Emergency

by John Markoff, The New York Times

Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time, with occasional human oversight, to designing a low-speed robotic vehicle with no brake pedal, accelerator or steering wheel. In other words, human driving would no longer be permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and watching what the passengers did while the autonomous system drove. In-car cameras captured employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.

“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.

Many automotive technologists doubt that autonomous cars will be able to count on humans in an emergency. Much of today’s self-driving technology assumes a person will take over when the computer cannot decide what to do, an assumption some experts consider untenable.

To outline a development path to complete autonomy, the automotive industry has established six levels of driving automation, ranging from manual driving at Level 0 up through complete autonomy at Level 5. In the middle, Level 3 is an approach in which the artificial intelligence driving the car may ask a human to take over in an emergency.
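For readers who want the taxonomy spelled out, here is a minimal sketch of those six levels as a Python enum. The one-line descriptions are paraphrased summaries of the industry scheme, not official SAE J3016 wording.

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """Rough, paraphrased summary of the industry's Level 0-5 scheme."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # system helps with steering or speed, not both
    PARTIAL_AUTOMATION = 2      # system steers and controls speed; human monitors
    CONDITIONAL_AUTOMATION = 3  # system drives, but may hand control back to the human
    HIGH_AUTOMATION = 4         # no handoff needed within a limited operating domain
    FULL_AUTOMATION = 5         # no human driver needed under any conditions

# Level 3 is the contested middle ground the article describes: the car drives
# itself most of the time but expects a human to take over in an emergency.
```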

But many automotive technologists are skeptical that the so-called handoff from machine to human can be counted on, because of the challenge of quickly bringing a distracted human back into control of a rapidly moving vehicle.

“Do you really want last-minute handoffs?” said Stefan Heck, chief executive of Nauto, a start-up based in Palo Alto, Calif., that has developed a system that simultaneously observes both the driver and the outside environment and provides alerts and safety information. “There is a really good debate going on over whether it will be possible to solve the handoff problem.”

Nauto’s data shows that a “driver distraction event” occurs, on average, every four miles. Mr. Heck said there was evidence that the inattention of human drivers was a factor in half of the approximately 40,000 traffic fatalities in the United States last year.

Research by Stanford University scientists found that most distracted drivers needed more than five seconds to regain control of a car when they were suddenly required to refocus on driving.

They also found that regaining control at high speed is markedly different from doing so in a slower-moving vehicle. Nevertheless, the auto industry is investing heavily in artificial intelligence to improve car safety before committing to full autonomy. Gill Pratt, a roboticist, says technologies that perceive risks as much as 15 seconds ahead may be needed before humans can safely take over a self-driving car in an emergency.
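To make the scale of that challenge concrete, here is a rough back-of-the-envelope sketch of how far a car travels during those windows. The five-second takeover time and the 15-second lookahead come from the figures cited above; the 30 mph and 65 mph speeds are illustrative assumptions, not from the article.

```python
# Convert the handoff and lookahead windows into distance traveled.
MPH_TO_METERS_PER_SECOND = 0.44704

def distance_traveled(speed_mph: float, seconds: float) -> float:
    """Meters covered at a constant speed over the given interval."""
    return speed_mph * MPH_TO_METERS_PER_SECOND * seconds

for speed in (30, 65):  # assumed city and highway speeds, in mph
    takeover = distance_traveled(speed, 5)    # Stanford's distracted-driver takeover window
    lookahead = distance_traveled(speed, 15)  # the risk-perception horizon Pratt describes
    print(f"At {speed} mph: ~{takeover:.0f} m pass during a 5 s handoff; "
          f"a 15 s lookahead spans ~{lookahead:.0f} m.")
```

At highway speed, roughly 145 meters go by before a distracted driver has regained control, which is why some researchers treat the last-minute handoff as a problem to be designed away rather than solved.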
