The Moral Dilemma for Self-Driving Cars
Is there a moral dilemma for self-driving cars? As autonomous cars near a full entry into the marketplace, ethical questions about their programming may affect both public safety and public adoption. In 2015, 4.5 million people were seriously injured and almost 40,000 were killed in traffic accidents, and a large share of those accidents were due to human error. The hope is that removing the potential for human error will drastically cut injury and fatality rates by preventing accidents. A recent study, however, highlights a moral dilemma that arises when autonomous cars must decide between protecting the safety of their occupants and protecting that of pedestrians.
A question of the public good versus self-sacrifice: The study
Researchers in the U.S. and France set out to explore an ethical dilemma that arises when autonomous cars are programmed: what should a car do when it must choose between preserving the lives of its passengers and preserving the lives of pedestrians? The researchers surveyed 2,000 participants, a majority of whom agreed that programming cars to save the greatest number of people, even over protecting the passengers in the vehicle, was a good idea. When presented with the idea of actually purchasing a vehicle programmed with such a utilitarian purpose, however, a majority said they would not want to own a car that was not programmed to protect them and their families, regardless of how many other lives might be lost.
In all, 76 percent of respondents said they believed vehicles should be programmed to preserve the greatest number of lives. Yet when asked to rate how important various programming features were, participants rated self-preservation at 50 and self-sacrifice at only 19, a clear discrepancy between what they endorsed in principle and what they wanted in their own cars.
The trolley problem
The study updates a classic ethical dilemma known as the trolley problem. In that scenario, the driver of a runaway trolley must choose between hitting five workers on one track or veering onto a different track and hitting only one: the sacrifice of one for the good of many. For autonomous vehicles, the open question is whether governments will mandate programming that protects the greater number of lives even at the expense of the people riding in the cars, or decline to issue such mandates and leave manufacturers free to focus the programming on self-preservation rather than the public good.
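The utilitarian rule at the heart of the trolley problem can be reduced to a simple decision procedure: among the available maneuvers, choose the one expected to cost the fewest lives. A self-preserving rule instead protects the passengers first. The toy sketch below contrasts the two; the `Maneuver` type and the casualty estimates are hypothetical illustrations, not drawn from any real vehicle software.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action with its estimated casualties."""
    name: str
    passenger_casualties: int
    pedestrian_casualties: int

    @property
    def total_casualties(self) -> int:
        return self.passenger_casualties + self.pedestrian_casualties

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Minimize total expected casualties, regardless of whether
    the victims are passengers or pedestrians."""
    return min(options, key=lambda m: m.total_casualties)

def self_preserving_choice(options: list[Maneuver]) -> Maneuver:
    """Protect the passengers first; break ties by fewest total casualties."""
    return min(options, key=lambda m: (m.passenger_casualties,
                                       m.total_casualties))

# A trolley-style scenario: staying the course kills five pedestrians,
# while swerving into a barrier kills the lone passenger.
options = [
    Maneuver("stay on course", passenger_casualties=0, pedestrian_casualties=5),
    Maneuver("swerve into barrier", passenger_casualties=1, pedestrian_casualties=0),
]

print(utilitarian_choice(options).name)       # → swerve into barrier
print(self_preserving_choice(options).name)   # → stay on course
```

The two rules disagree on exactly the cases the survey asked about, which is why the choice of objective function, and who gets to set it, is the crux of the policy question.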
As the study shows, people may be less willing to purchase vehicles programmed to save the greatest number of lives rather than the lives of the purchasers and their families. This could undermine the marketability of the vehicles and, with it, whether autonomous cars ever truly catch on with the public at large.
Autonomous cars are still being tested and are not yet ready for the marketplace. Programming has not yet advanced to the point where vehicles can make moral decisions about whose lives to save. While autonomous cars have the potential to save a large number of lives each year, the public would still need to be willing to buy them for any real and lasting positive impact to materialize.