Why Autonomous Cars Aren’t Hit by the Trolley Problem
Monday, June 27, 2016

Imagine you are driving down a two-lane mountain road.  As you round a bend, you see five pedestrians in your lane.  You do not have enough space to stop before hitting them, but in the other lane there is only one pedestrian.  Do you stay in your lane, killing five? Do you swerve into the other lane, killing one?  Or do you steer off the road and down the mountain, avoiding the pedestrian fatality but likely killing yourself?  Can you make that decision better than an autonomous car?

You may recognize this scenario as the trolley problem studied in many introductory philosophy classes.  Many articles argue that trolley problem scenarios present an ethical dilemma for autonomous cars and an obstacle to the cars’ continued development.  But that scare tactic makes a weak argument against autonomous driving.

First, the trolley problem may not be much of a moral quandary after all: in a study of professional philosophers, 90% of the respondents (when “other” responses were removed) agreed that switching lanes was the ethical choice.  That consensus parallels a study on how autonomous cars should approach the trolley problem, in which the participants were “generally comfortable” with autonomous cars that valued pedestrian and occupant lives equally.  Simply put, people agree that the cars should weigh all lives equally.

This approach aligns with the principles underlying most states’ product liability laws, which apply the “risk-utility” test to determine if a product is defectively designed.  That test does not discriminate between the safety of vehicle occupants and the safety of the general public.  Manufacturers are less likely to be liable under risk-utility tests if they design autonomous cars to value occupant lives and third parties’ lives equally.

Second, trolley problem enthusiasts assume that autonomous cars use rule-based thinking that requires specific programming for such a scenario.  They don’t.  Typically, autonomous cars use machine learning—where the car develops a probabilistic understanding of how to respond to the world around it—rather than specific, human-provided rules.  Humans, however, provide feedback on past responses, and the car adapts to make better decisions in the future.  This allows the software to address situations the programmers never imagined, but it prevents them from writing rules like “always swerve to avoid ten pedestrians when you can hit a single, different pedestrian.”  As a result, with machine learning “you have no way of knowing why a program could come up with a particular rule,” but you can avoid the “helpless dither” that can trap a rules-based system facing an unforeseen situation.
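
To make the contrast concrete, here is a minimal sketch in Python.  The names, features, and weights are entirely hypothetical and stand in for a trained model; this is not any manufacturer’s actual software.  It shows why a rules engine needs a hand-written rule for every scenario, while a learned model simply scores whatever candidate maneuvers it is given.

```python
# Hypothetical sketch only: contrasts a rule-based planner with a learned,
# probabilistic one. All names and numbers are made up for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class Maneuver:
    name: str
    features: List[float]  # e.g., proximity to obstacles, lateral acceleration


def rule_based_choice(maneuvers: List[Maneuver]) -> Maneuver:
    """A rules engine needs an explicit, human-written rule for each case.
    An unforeseen scenario falls through to the 'helpless dither'."""
    for m in maneuvers:
        if m.name == "brake_in_lane":
            return m
    raise RuntimeError("No rule covers this situation")  # the dither


def learned_risk(features: List[float], weights: List[float]) -> float:
    """A toy learned model: weights come from training feedback, not from a
    programmer writing 'swerve when five pedestrians are ahead'."""
    return sum(w * f for w, f in zip(weights, features))


def learned_choice(maneuvers: List[Maneuver], weights: List[float]) -> Maneuver:
    # Pick the maneuver the model estimates to be least risky, even in a
    # situation the programmers never imagined.
    return min(maneuvers, key=lambda m: learned_risk(m.features, weights))


if __name__ == "__main__":
    options = [
        Maneuver("brake_in_lane", [0.9, 0.1]),
        Maneuver("swerve_left", [0.4, 0.6]),
    ]
    trained_weights = [0.7, 0.3]  # placeholder values standing in for training
    print(learned_choice(options, trained_weights).name)  # -> "swerve_left"
```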

Finally, autonomous cars will never be “programmed to kill,” because fatality is known only after a crash, not before.  Pre-crash, there are too many variables for a car to know that it will kill someone.  Autonomous driving systems will never ask “who should I kill” because they cannot know the consequences of any specific crash.  They instead ask “what is the least dangerous thing I can do at this moment?”  For example, Google’s autonomous car first avoids collisions with pedestrians and cyclists, then avoids other vehicles as a secondary priority, and finally avoids fixed objects.  But that does not mean Google knows that a specific pedestrian collision would be fatal.  Advocates of the trolley problem assume a certainty that never exists in the real world.
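
A rough sketch of that kind of prioritization, again with made-up collision probabilities and an assumed ordering rather than Google’s actual code, might look like this: the car ranks candidate maneuvers by estimated collision probability, not by predicted fatalities, because no outcome is known to be fatal in advance.

```python
# Illustrative sketch only (assumed structure, not Google's real software):
# rank candidate maneuvers by estimated collision probability, grouped by the
# priority ordering described above: pedestrians/cyclists, then other
# vehicles, then fixed objects. All probabilities below are invented.

from typing import Dict, Tuple

# Each candidate maneuver maps an obstacle class to an estimated probability
# of colliding with something in that class.
candidates: Dict[str, Dict[str, float]] = {
    "brake_in_lane": {"pedestrian_or_cyclist": 0.30, "vehicle": 0.05, "fixed_object": 0.01},
    "swerve_left":   {"pedestrian_or_cyclist": 0.05, "vehicle": 0.40, "fixed_object": 0.10},
    "swerve_right":  {"pedestrian_or_cyclist": 0.02, "vehicle": 0.10, "fixed_object": 0.70},
}

PRIORITY = ["pedestrian_or_cyclist", "vehicle", "fixed_object"]


def risk_key(risks: Dict[str, float]) -> Tuple[float, ...]:
    # Lexicographic comparison: lower pedestrian/cyclist risk always wins;
    # ties fall through to other vehicles, then to fixed objects.
    return tuple(risks[c] for c in PRIORITY)


least_dangerous = min(candidates, key=lambda name: risk_key(candidates[name]))
print(least_dangerous)  # -> "swerve_right" under these invented numbers
```

The point of the sketch is that the system chooses the least dangerous available action under uncertainty; nowhere does it compute, or even represent, “who dies.”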

The lesson here is that the trolley problem is a tool used by philosophers to advance the study of philosophy, not to program real-world morality.  Even a Harvard philosopher conducting trolley problem experiments has acknowledged that “[t]he trolley problems don’t tell us what we’d do if we actually faced an out-of-control streetcar,” much less how to drive a car on the street.  Instead of trying to program rules to address specific variants of the trolley problem, manufacturers should continue their work to develop safe, reliable autonomous cars that maximize safety for all travelers.
