Self-driving cars have become a wildly popular topic, as the likelihood of their adoption is now greater than ever before. Yet even with rapid advances in their development, autonomous vehicles still face serious challenges. Among them is the question of whether morality can be built into machines that seem incapable of such a humanlike attribute: autonomous cars may need to be designed with the ability to decide who lives and who dies in an accident.
Ideally, driverless cars would avoid this sort of problem altogether. Globally, nearly 1.3 million people die in car accidents every year, a toll that the adoption of autonomous vehicles could reduce by an estimated 90 percent. That would certainly make the roads safer, but it is virtually impossible to eliminate car accidents entirely, especially when external factors are at fault. Self-driving cars, guided by their programming, will sometimes have to determine the outcome of an accident, which could very well involve human deaths.
This issue derives from the ethical question known as The Trolley Problem, first posed by Philippa Foot in 1967. The thought experiment posits a scenario in which you notice a runaway train that will kill five innocent people tied to the tracks unless you pull a lever. If you pull the lever, you save those five, but the train switches to another track, where it strikes and kills one unsuspecting man. What do you do?
This question has sparked many different responses, as well as variations that alter the scenario. One of the better-known alternatives is the thought experiment known as The Fat Man: five people are still in harm's way, but the only way to stop the train is to push a man large enough to stop it onto the tracks in front of it.
There is no truly correct answer to either question, but most people would opt to pull the lever in the first scenario to save more lives overall. Whether they know it or not, people who choose this path favor a utilitarian approach, putting the good of the many first. In the second scenario, the wholly utilitarian decision is unchanged: push the fat man in front of the train and save more people overall. Yet this option raises additional moral concerns, because it involves directly ending an innocent person's life; there is no switch to do the work, only your own hands.
For self-driving cars, the decision would obviously rest with the car. If an autonomous vehicle cannot stop in time and must choose between hitting one person or five, it will most likely take the utilitarian approach and minimize the number of lives lost.
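To make that utilitarian rule concrete, here is a minimal sketch in Python of a choice procedure that simply counts expected casualties and picks the smallest number. It assumes the car can already estimate how many people each available maneuver would harm; the class, function, and numbers below are hypothetical illustrations, not the API or logic of any real autonomous-driving system.

```python
# Illustrative sketch of a purely utilitarian choice rule.
# The Maneuver class and the casualty estimates are hypothetical stand-ins,
# not part of any actual autonomous-vehicle software.

from dataclasses import dataclass
from typing import List


@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # the car's estimate of lives lost on this path


def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    """Return the option with the fewest expected casualties."""
    return min(options, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    options = [
        Maneuver("stay in lane", expected_casualties=5),
        Maneuver("swerve onto shoulder", expected_casualties=1),
    ]
    print(choose_maneuver(options).name)  # prints "swerve onto shoulder"
```

Even this toy rule makes the tension plain: it counts heads and nothing else, with no regard for whose heads they are, including whether one of them belongs to the car's own passenger.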
However, when other variables are factored in, the question becomes far more complicated. The Fat Man is a much harder question to answer if you are the fat man, and harder still if you are the fat man and have no control over the decision. In the event of an accident, your driverless car might have to end your life to save the lives of those outside the automobile.
This, in turn, raises an interesting question: would people want to buy a car that might decide to kill them? Even people with utilitarian leanings care greatly about their own well-being. If that fear discourages consumers from purchasing driverless cars, it could prevent or delay the widespread adoption of a technology that would actually make the roads safer for drivers.
Beyond being safer, autonomous vehicles are cleaner and can significantly reduce traffic congestion. The anxieties surrounding self-driving cars are plausibly no different from those that greeted new technology in the past, and as developers work out the kinks, people may come to understand and trust these automobiles and their technology. The thought of giving machines control over life and death in certain situations might seem frightening, but their capabilities are only an advancement of the programming that already exists in our own minds for handling such situations.