Tuesday, October 18, 2016

In Praise Of Mercedes-Benz’s Killer Robot Car

I’ve been writing about the law of driverless cars since 2011. For more than forty years, the general test for whether a car was defectively designed has been whether the manufacturer met “a reasonable duty of care in the design of its vehicle consonant with the state of the art to minimize the effect of accidents.” Larsen v. General Motors Corporation, 391 F.2d 495 (8th Cir. 1968).


Volvo got a lot of free press last year when it said it would accept legal responsibility for crashes involving self-driving cars, but, as always, the fine print said otherwise:

Volvo also told the BBC it would only accept liability for an accident if it was the result of a flaw in the car’s design. “If the customer used the technology in an inappropriate way then the user is still liable,” said Mr Coelingh. “Likewise if a third party vehicle causes the crash, then it would be liable.”

In other words, Volvo agreed to nothing at all. Volvo simply agreed it would be held responsible in the same circumstances under which it would already be held responsible: when there was a flaw in the car’s design.


That raises an obvious question: how should driverless cars be designed? Most of the media attention has been devoted to philosophical questions like “the Trolley Problem.”


The Trolley Problem is a philosophy exercise (from 1967) in which participants are asked to imagine that a runaway trolley is careening toward five people. The participant is given the option to save those five people by killing another person. (In one version, the participant can push a person onto the tracks in front of the trolley. In another, the participant can flip a switch to divert the trolley onto a side track where it will kill one person instead.) Should they do it? There’s no logically “correct” answer; the exercise is meant to test how the participant weighs ethical and moral choices.


The application of the Trolley Problem to autonomous cars is obvious: we can dream up all kinds of scenarios in which the car is forced to decide between options that imperil the car’s occupants, the occupants of other cars, or pedestrians. Mercedes-Benz has just given its answer: focus on the occupants of the car. Christoph von Hugo, the manager of driver assistance systems and active safety at Mercedes-Benz, said, “If you know you can save at least one person, at least save that one. Save the one in the car. … If all you know for sure is that one death can be prevented, then that’s your first priority.”


I think he’s right: it makes no sense to leave a thorny moral and ethical question like this to the split-second “intuition” of a computer program.


Even in a situation as simple as the Trolley Problem, people struggle to come up with answers, and the answers they do come up with aren’t consistent. When presented with the version of the Trolley Problem that involves pushing a stranger, most people say pushing one person to save the others would be wrong. When presented with the version that involves flipping a switch, most people say the switch should be flipped, killing one person to save the others. Whatever the reasons for this difference, humans aren’t capable of coming up with an iron-clad moral framework for themselves, much less a moral framework they could teach to a computer.


As a practical matter, traffic accidents rarely involve intriguing philosophical dilemmas. Generally, everyone is safest when each driver looks out for his or her own safety. In most accidents, the “solution” that’s best for the occupants of the car is also the solution that’s best for everyone else: slow down and avoid hitting anything. Programming the computer to delay that option while it tries to resolve a dilemma with imperfect information risks creating unforeseen complications. For every time we can imagine an autonomous car driving into pedestrians to avoid a tractor-trailer, we can equally imagine an autonomous car trying to avoid an accident with the cars behind it by accelerating onto a sidewalk and hitting pedestrians it didn’t see before it moved over.
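
To put that in more concrete terms, here is a purely hypothetical sketch of the difference between a car that brakes first and one that deliberates first. Every function name and number below is invented for illustration; nothing here reflects any manufacturer’s actual software.

    # Hypothetical illustration only; not any manufacturer's real control logic.

    def brake_first(time_to_impact, clear_path_known):
        # Start shedding speed immediately -- braking almost always reduces
        # harm to occupants and bystanders alike.
        actions = ["apply maximum safe braking"]
        # Swerve only when braking alone won't be enough AND the alternative
        # path is actually known to be clear.
        if time_to_impact < 1.0 and clear_path_known:
            actions.append("steer toward the clear path")
        return actions

    def deliberate_first(time_to_impact, clear_path_known):
        # The car "thinks" about who to save before it acts, so braking
        # starts later and rests on imperfect information about who is nearby.
        actions = ["weigh which outcome harms the fewest people"]
        actions.append("apply maximum safe braking")
        if clear_path_known:
            actions.append("steer toward the chosen path")
        return actions

    print(brake_first(0.8, clear_path_known=False))
    print(deliberate_first(0.8, clear_path_known=False))

In the common case, the brake-first car has already shed speed while the deliberating car is still weighing options it can’t fully see.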


These issues aren’t just idle speculation. They’re real problems that are already confronting us on the road, like the death of a man using Tesla’s “autopilot” when his car ran under a tractor-trailer. Tellingly, as that article notes, the death could have been prevented if trucks in the United States were required to have Mansfield bars on the sides, as they’re required to have in Europe. Never doubt the power of an industry to get in the way of safety regulation.


To their credit, the Department of Transportation and the National Highway Traffic Safety Administration have approached this new technology carefully. Last month, the DOT and NHTSA released guidelines for self-driving cars, but stopped short of issuing official regulations. Safety advocates like myself have long worried that the car industry — joined by Google, Uber, and other technology companies — would convince the federal government to preempt state regulation and lawsuits regarding autonomous cars, which would remove the primary incentive car companies have to ensure driverless cars are as safe as they can be.  


It’s still too early to say which way the law will go on autonomous cars, but I do know this: like with Volvo patting itself on the back for agreeing to bear the minimum responsibility required by law, the car companies and the tech companies will keep lobbying like crazy to avoid responsibility. We can’t let philosophical exercises distract us from the simple truth that lawsuits are the primary reason cars are safer today.


