
As artificial intelligence insinuates itself into more and more areas of human life, consumers and industry alike face serious questions about how to understand and negotiate the tradeoffs between its risks and benefits. Moral philosophers might protest that considerations of cost and benefit are but a tiny sliver of the questions we should be asking about the revolution that AI will purportedly bring, but many are resigned to the realization that a rational calculus of costs and benefits will carry the day.
Consumers are by and large enthusiastic about the arrival of autonomous vehicles, but they have also expressed concerns about how these vehicles will distribute risks and benefits. Observers have lauded the anticipated ability of driverless cars to reduce road deaths, yet many consumers remain uneasy about putting their lives in the hands of a machine. The widespread adoption of driverless cars depends on investigating, diagnosing, and overcoming this unease. What does it consist in? Is it rational?
One concern is that driverless cars will make mistakes that humans would not have made. Consider the crash of a Tesla Model S in May of 2016. That Tesla, operating on Autopilot, mistook a white semi-trailer for the bright daytime sky, and plowed into it at full speed, killing its driver (or, more properly, its passenger). This is an accident that probably would not have occurred had the driver been paying attention to the road. Simply put, this machine made a mistake that a human would not have made.
These mistakes, the ones a human would not have made, may be the most serious that autonomous vehicles make. Even accepting that these vehicles will reduce the overall number of injuries and fatalities from car accidents, they will not simply shrink the set of people injured or killed; they will move it. Different people will end up being injured or killed than would have been before. So, while driverless cars stand to save many lives, they will also endanger the lives of others, shifting the risk of harm onto people who would not otherwise have borne it. Some might take this to be a reason to oppose driverless cars, since we would no longer simply be saving people from injury, but endangering some to spare others.
The general form of this worry was identified by philosophers in the early 1980s, and ethicists have been studying the ethics of shifting harm between sets of people ever since. It turns out that many, perhaps most, large-scale policy changes in society raise similar problems: they change not just the total amount of benefit and harm in society, but also who experiences it.
Is it possible to justify such policies when they harm people who would not otherwise have been harmed? The answer seems to be yes. Take another policy as an example: nonprofits and governments have been working to discourage drunk driving for decades. That effort has been successful; deaths from drunk driving have fallen precipitously over the past few decades. But notice that this policy has presumably resulted in the deaths of some people who would not otherwise have died. Many of the people who would have driven drunk instead chose to get a ride with a friend or take a cab, and, presumably, some of those trips resulted in an injury or death that would not otherwise have happened. This is obviously not to say that allowing drunk driving is preferable. In fact, the example is meant to show exactly the opposite: even policies that are obviously justified will sometimes result in people being injured or killed who would not otherwise have been. The appropriate yardstick for such policies is the net number of deaths and injuries we expect them to prevent.
There may be other legitimate reasons to be nervous about the arrival of autonomous cars: repeatedly demonstrated weaknesses in their cybersecurity, for example; questions about whether they will be fully accessible to, say, the blind or the deaf; and uncertainty about whether they will alleviate or instead exacerbate traffic congestion. The anxiety about turning life-and-death decisions over to machines may yet be warranted. But if it is, it is not because they might injure or kill people who wouldn't otherwise have been injured or killed. And we should not let nebulous or poorly considered worries stand in the way of significant societal benefit.
Contributing Author: Ryan Jenkins, Assistant Professor of Philosophy, California Polytechnic State University, San Luis Obispo