American Journal of Public Health
. 2017 Apr;107(4):532–537. doi: 10.2105/AJPH.2016.303628

Public Health, Ethics, and Autonomous Vehicles

Janet Fleetwood
PMCID: PMC5343691  PMID: 28207327

Abstract

With the potential to save nearly 30 000 lives per year in the United States, autonomous vehicles portend the most significant advance in auto safety history by shifting the focus from minimization of postcrash injury to collision prevention.

I have delineated the important public health implications of autonomous vehicles and provided a brief analysis of a critically important ethical issue inherent in autonomous vehicle design.

The broad expertise, ethical principles, and values of public health should be brought to bear on a wide range of issues pertaining to autonomous vehicles.


The public’s health has been dramatically affected by improvements in automotive design, such as seatbelts and automatic airbags, yet nothing portends a more significant reduction in morbidity and mortality rates from motor vehicle accidents than autonomous vehicles, sometimes known as “driverless,” “robotic,” or “self-driving” cars.1,2 Motor vehicle safety ranks among the past decade’s “ten great public health achievements”3 in the United States, alongside tobacco control, prevention and control of infectious disease, and occupational safety.4 Autonomous vehicles, which could reduce traffic fatalities by up to 90% by eliminating accidents caused by human error—estimated to be 94% of fatalities—could save more than 29 000 lives per year in the United States alone.5,6 Around the world, autonomous cars could save 10 million lives per decade, creating one of the most important public health advances of the 21st century.7,8

Although crash avoidance or mitigation of harm caused by motor vehicle accidents are specifically public safety issues, for simplicity I have included public safety issues under the intellectual umbrella of public health. From the vantage point of public health, the overarching goal is to transform the current approach to automotive safety from reducing injuries after collisions to complete collision prevention. Although the feasibility of creating an autonomous vehicle that never crashes is debatable and, by some analyses, impossible to achieve—considering the burst of enthusiasm, investment, and effort in autonomous vehicle technology—it is time to reflect on the many public health issues that have not yet been adequately analyzed or discussed.9,10

Vehicles equipped with automated driving systems are described in the literature as “autonomous,” “driverless,” “robotic,” or “self-driving,” yet it is important to clarify distinctions and use terms consistently. SAE International (formerly the Society of Automotive Engineers) specifies 6 levels of driving automation, numbered 0 through 5, and the US National Highway Traffic Safety Administration recently adopted this system.11–13

Levels start at level 0—no automation—which relies on a human driver, full-time, for all aspects of driving. In level 1—driver assistance—the system sometimes assists with a specific task, like steering or acceleration and deceleration, with the human driver performing all remaining tasks. In level 2—partial automation—the system performs tasks such as steering along with acceleration and deceleration, and the human monitors and remains fully responsible for the remainder of driving tasks. In level 3—conditional automation—the system manages all driving tasks and monitors the driving environment, and the human intervenes only when the system requests assistance. In level 4—high automation—the system drives and monitors the environment in certain conditions without human involvement, continuing to perform even if a human driver does not respond appropriately to a request to intervene; such a vehicle is considered fully autonomous in many driving scenarios. Finally, in level 5—full automation—the system does everything a human driver could do under all conditions, matching or exceeding a human’s capabilities in every driving scenario.
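As a compact reference, the taxonomy above can be restated programmatically. This sketch is my own summary of the levels as described in the text, not SAE’s normative language:

```python
# Summary of the 6 SAE driving-automation levels described above.
# The wording is a paraphrase of the article's descriptions.
SAE_LEVELS = {
    0: ("no automation", "human performs all driving tasks full-time"),
    1: ("driver assistance", "system assists with one task, e.g. steering or "
        "acceleration/deceleration; human does everything else"),
    2: ("partial automation", "system steers and accelerates/decelerates; "
        "human monitors and remains fully responsible"),
    3: ("conditional automation", "system drives and monitors; human "
        "intervenes only when the system requests assistance"),
    4: ("high automation", "system drives in certain conditions even if the "
        "human does not respond to a request to intervene"),
    5: ("full automation", "system matches or exceeds a human driver under "
        "all conditions"),
}

def human_monitors_environment(level: int) -> bool:
    # The article's key distinction: at levels 0-2 the human monitors the
    # driving environment; at levels 3-5 the automated system does.
    return level <= 2
```

This also makes the article’s grouping explicit: the “autonomous vehicle” discussed here corresponds to the levels for which `human_monitors_environment` returns `False`.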

A key distinction is that in levels 1 and 2, a human driver monitors the driving environment, whereas in higher levels the driver can cede control under certain conditions and an automated driving system will monitor the driving environment and take control. Some vehicles may have multiple features that allow them to operate at different levels depending on which features are engaged. Because my focus is on ethics and public health, I emphasize vehicles that can drive themselves independently, without human intervention or continuous monitoring, at least some of the time. I refer to this type of vehicle, classified in SAE levels 3, 4, and 5, with the generic term “autonomous vehicle” to enhance clarity and simplicity.

Autonomous vehicles are on their way. Google began its test project in 2009 and has clocked more than 1.5 million miles with test drivers aboard in California, Texas, Washington, and Arizona,14 and then-President Barack Obama proposed spending $4 billion to “accelerate the acceptance” of autonomous vehicles in the United States.15 In August 2016, Singapore led the innovation race with the world’s first autonomous taxis, operated by nuTonomy, a highly autonomous vehicle software startup with the goal of creating a fully autonomous fleet by 2018.16 In nuTonomy’s test phase, a human driver sits in the front seat prepared to take the wheel if necessary while a backseat researcher monitors the vehicle computers.

Not to be outpaced, Uber established the Advanced Technologies Center in Pittsburgh, Pennsylvania, with the goal of “bringing safe, reliable transportation to everyone, everywhere” and, in September 2016, began testing autonomous vehicles with live passengers and an ancillary human driver on Pittsburgh streets.17,18 The potential worldwide market is huge, and international automakers— including Volvo, Nissan, Volkswagen, Audi, Tesla, and Ford—are rapidly exploring autonomous vehicle technology. On a larger scale, prototype autonomous buses were tested in Switzerland and Finland, and autonomous trucks are already being tested on highways in Colorado and Nevada.19–22

Autonomous vehicles are replete with public health issues that have ethical implications that warrant cogent analysis and informed response.23 Several recent symposia have discussed the ethical issues of autonomous vehicles but did not have a specifically public health focus.24–27 Conversely, a recent symposium on autonomous vehicles at the Johns Hopkins Center for Injury Research and Policy in the Bloomberg School of Public Health and a recent report by the Altarum Institute examined autonomous vehicles and the role of public health but did not focus sustained attention on ethical issues.28,29 All this important work sets the stage for future academic symposia, publications, public hearings, and community conversations that should examine, in depth, the important ethical and public health ramifications of autonomous vehicles.

APPLYING PUBLIC HEALTH ETHICS

The introduction and potential proliferation of autonomous vehicles present the classic challenge of balancing the freedom of private manufacturers to innovate with government’s responsibility to protect public health.30 Autonomous vehicles raise many public health issues beyond their potential to improve safety, ranging from concerns about more automobile use and less use of healthier alternatives like biking or walking to concerns that focusing on autonomous vehicles may distract attention and divert funding from efforts to improve mass transit. There are, additionally, issues of access, especially for the poor, the disabled, and those in rural areas.

There are important and complex policy and regulatory concerns; insurance issues, including the possibility of a no-fault auto insurance system for autonomous vehicles; product and tort liability issues; and issues pertaining to privacy and cybersecurity for all communications into and within the vehicle, all of which are beyond the scope of this article.31–38 Finally, we have just begun to explore the effect autonomous vehicles will have on traffic, pollution, and the built environment.28 Clearly, many issues affect the health of the public beyond accident prevention and, with their considerable skills as researchers, data analysts, policy advocates, and community catalysts, public health leaders have much to contribute to conversations about health impacts, equity, social justice, and the values of public health.39

I provide an example and brief analysis of a critically important ethical issue for autonomous vehicles: the algorithms being created for autonomous vehicles in situations of forced choice, such as whether to hit a parked car or a pedestrian on an ice-covered road. I argue for greater involvement starting now, during the design phase, of public health leaders and describe how the values of public health can guide conversations and ultimate decisions. By reflecting on the ethical and social implications of autonomous vehicles and working collaboratively with designers, manufacturers, companies like Uber and nuTonomy, city health departments, the public, and policymakers on the local, state, and federal level, public health leaders can help develop guidelines that foster equity and safety across the population.

A widely cited framework for public health ethics provides a starting place for public health leaders to frame the questions and influence the decisions that will be made in the coming months and years.39 As the classic Code of Ethics for Public Health40 recommends, public health advocates can advocate for the rights of individuals and their communities while protecting public health by helping to establish policies and priorities through “processes that ensure an opportunity for input from community members.”40(p1058) Public health thought leaders can ensure that communities have the information they need for informed decisions about whether and how autonomous vehicles will traverse their streets, and they can make sure that manufacturers who test and deploy autonomous vehicles obtain “the community’s consent for their implementation.”40(p1058) Finally, public health leaders can work for the empowerment of the disenfranchised, incorporating and respecting “diverse values, beliefs, and cultures in the community” and collaborating “in ways that build the public’s trust.”40(p1058)

CAN GOVERNMENT REGULATION KEEP PACE?

Autonomous vehicles constantly obtain information from their environment, using a variety of sophisticated cameras and sensors that rely on ultrasound, radar, and laser-based ranging, or “lidar.” A variety of advanced technologies enable autonomous vehicles to correct for human mistakes and “learn” from the “experience” of other autonomous vehicles. Because all autonomous vehicles store sensor data, engineers can reconstruct the events of a crash, examine what the vehicle sensed through its multiple inputs, and analyze the logic it used to determine its course. Manufacturers and software developers can then use this information to modify the car’s program and thereby improve future decisions. Yet every such improvement, every choice, rests on ethical assumptions, and machine-learning research has only begun to explore moral behavior—or ethical crashing algorithms—for autonomous vehicles.41,42 Is it better to kill 2 autonomous vehicle passengers or 2 pedestrians? One person or 1 animal? Collide with a wall or run over a box with unknown contents? Forced choices like these must be programmed in with sophisticated algorithms that, ultimately, rest on fundamental—but largely unarticulated—ethical assumptions.
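To make concrete how ethical assumptions end up encoded in software, consider a deliberately simplified sketch. The maneuver names, harm estimates, and weights below are hypothetical illustrations, not drawn from any real system; the point is that a single weighting parameter silently encodes an ethical policy:

```python
# Hypothetical forced-choice scorer. Harm estimates and weights are
# illustrative assumptions only; real planners are far more complex.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_deaths: float    # expected occupant fatalities
    pedestrian_deaths: float  # expected pedestrian fatalities

def utilitarian_score(m: Maneuver) -> float:
    # Pure tally: every projected death counts the same.
    return m.occupant_deaths + m.pedestrian_deaths

def occupant_protective_score(m: Maneuver, occupant_weight: float = 3.0) -> float:
    # Valuing occupant lives three times as much encodes a
    # "protect the passengers" policy.
    return occupant_weight * m.occupant_deaths + m.pedestrian_deaths

options = [
    Maneuver("swerve into barrier", occupant_deaths=1.0, pedestrian_deaths=0.0),
    Maneuver("stay in lane", occupant_deaths=0.0, pedestrian_deaths=2.0),
]

utilitarian_choice = min(options, key=utilitarian_score)
protective_choice = min(options, key=occupant_protective_score)
# The utilitarian tally sacrifices the occupant to save 2 pedestrians;
# the occupant-protective weighting stays in lane instead.
```

Both functions are defensible arithmetic, yet they prescribe opposite actions in the same scenario, which is exactly why the underlying assumptions need to be articulated and debated rather than left implicit in code.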

In September 2016, the US Department of Transportation issued the first Federal Automated Vehicles Policy.13 In that document, the department highlighted the transformational change affecting the automobile industry, stating that autonomous vehicles “may prove to be the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago.”13 Among other things, the department provides 15 safety standards for testing what it refers to as highly autonomous vehicles, including fully self-driving cars, and requires safeguards for system failures. On the single page devoted to ethical issues of autonomous vehicles in the 116-page policy, the department recognizes the complexity and challenges of ethical issues, such as forced-choice algorithms, and states:

This discussion is intended only to introduce the relevance and importance of ethical considerations to the development and deployment of HAVs [highly autonomous vehicles]. It is not intended to be exhaustive or definitive or to answer ethical questions, but rather only to raise the general topic of ethics as worthy of discussion and consideration by manufacturers, consumers, government, and other stakeholders.13(section 11)

FORCED-CHOICE ALGORITHMS

We can anticipate that many ethical and policy issues will be raised by the adoption of autonomous vehicles. Autonomous vehicles present classic ethical conflicts between an individual’s interest—that passengers arrive quickly, cheaply, and safely at their destination—and the community’s interest—that roads be safe for all travelers, including passengers in both autonomous and driver-dependent vehicles, as well as bicyclists and pedestrians.43 Our government has various duties to protect the population, including increasing the availability of information to the public and to decision-makers, protecting people from harm, and providing the conditions under which people can lead healthy lives.44

Public health ethics additionally recognizes the need to reduce health inequalities and protect vulnerable groups. Although certain freedoms are infringed on—I cannot, for example, build my own highway off-ramp to provide myself a shortcut to my house—such restrictive laws balance individual freedom against community benefit and protection from harm. The political process includes balancing competing interests and, further, extends to regulating the behavior of others (in this case, autonomous car manufacturers and companies, such as Uber and nuTonomy, which deploy the vehicles) and establishing mechanisms for ongoing transparency and accountability. Although autonomous vehicles may help reduce morbidity and mortality from motor vehicle accidents, their design and use must be tempered by regulations that are devised following an informed, collaborative political process that meets the objectives and aligns with the values of public health.

Driving in the real world frequently poses ethically challenging situations requiring drivers to make sophisticated, nearly instantaneous, ethical decisions, and it is simplistic to assume that self-driving cars need only follow the rules of the road. Driving examples abound, as when a driver deliberately crosses a double yellow line into an empty lane reserved for oncoming traffic rather than hit a person changing a tire on the shoulder of the road or when a driver goes through a red traffic signal to get out of the path of an oncoming train. The autonomous vehicle, like the human driver, must balance safety, mobility, and legality when those objectives conflict. A research scientist at the University of Virginia Transportation Research Council states that automated vehicles

must decide quickly, with incomplete information, in situations that programmers often will not have considered, using ethics that must be encoded all too literally in software. Fortunately, the public doesn’t expect superhuman wisdom but rather a rational justification for a vehicle’s actions that considers the ethical implications. A solution doesn’t need to be perfect, but it should be thoughtful and defensible.45

Yet just what is required for a decision to be ethically defensible? Hypothetical situations illustrating ethical conflicts between competing undesirable outcomes in which an agent must make a choice have long been the objects of philosophic debate and can help illuminate the kinds of ethical issues involved here. The trolley problem, a scenario created by philosopher Philippa Foot in 1967 and popularized by many other philosophers and cognitive scientists since, presents a conflict that has been widely cited in discussions of self-driving cars.46,47 Although there are some points of disanalogy that I will not discuss, in its simplest form the trolley problem supposes that there is a runaway trolley on train tracks heading directly for 5 people who are, inexplicably, tied to the tracks.48 You, the reader, are standing beside a lever that, if pulled, will switch the trolley to a different track that has only 1 person tied to it. You can either do nothing, allowing the speeding trolley to kill the 5 people on the main track, or divert the trolley by pulling the lever, resulting in the death of just 1 person. The thought experiment asks which choice is most ethically justifiable.

For US drivers on the horns of this kind of dilemma in the real world, the sudden emergency doctrine and the unavoidable accident doctrine provide legal protection in some states for reasonably prudent human drivers who make questionable choices under very limited and extenuating circumstances.49,50 We must consider whether the decisions made by autonomous vehicles should be legally protected in the same way. Will manufacturers and vehicle owners avoid liability in such situations? Although the need for the implementation of a forced-choice algorithm may arise infrequently on the road, it is important to analyze and resolve such issues as much as possible early in the development phase.

Of course, one can simply tally the death toll and argue on a utilitarian basis that the death of 1 person is preferable to the death of 5, or resort to a straightforward rule-based approach that applies a seemingly inviolable rule, such as “do not kill.” Yet, in addition to providing inconsistent directives, such simplistic approaches miss the complexities of forced-choice situations. Is it worse to actively pull the lever to change course than to just let things happen as fate allows? Is it really better to just stand there and watch to avoid breaking a rule? Should we quickly assess the social value of the 5 potential victims versus the 1 victim, noting perhaps that the 5 are wearing Nazi uniforms and the 1 is dressed as a nurse? Would the death of children be more repugnant than the death of elderly adults? Should pregnant women count twice, once for themselves and once for the fetus? Finally, in an accident causing injuries but not fatalities, should algorithms prioritize decisions by the likelihood, severity, and quality of life effects of various types of injuries as well as the number of people injured?

Perhaps some data will help. In a recent empirical study of autonomous car ethics, participants were given various hypothetical forced-choice accident scenarios and asked to choose between the death of 1 or more pedestrians and the death of a passenger or several passengers in the autonomous vehicle.51,52 The study found that 76% of participants agreed that the most justified approach was the utilitarian one, in which the autonomous vehicle sacrifices its own passengers if that would save more lives overall (n = 182; 95% confidence interval = 69, 82). However, when it came to purchasing an autonomous vehicle, respondents were significantly less likely to buy one if they and their family were the passengers to be sacrificed in a forced-choice accident scenario than if they and their family members were not sacrificed for the greater good (scale = 1–100; median = 19; P < .001).52 In short, study participants wanted other people to buy vehicles that made utilitarian choices to preferentially save the most people but preferred to purchase a vehicle that preferentially protected its own passengers.
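As a side note on the reported statistics, the 69% to 82% interval is consistent with a standard Wilson score confidence interval for a binomial proportion. A short sketch (assuming only the reported p = 0.76 and n = 182) reproduces it:

```python
# Wilson score 95% confidence interval for a binomial proportion,
# checked against the study's reported values (76%, n = 182, CI = 69-82).
from math import sqrt

def wilson_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """Wilson score interval for an observed proportion p from n trials."""
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - margin, center + margin

lo, hi = wilson_ci(0.76, 182)
# lo ≈ 0.693, hi ≈ 0.816 — i.e., roughly the reported 69% to 82%
```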

For those who want to challenge their own ethical decision-making, the Massachusetts Institute of Technology Media Lab has created an online platform that generates autonomous vehicle accident scenarios.51,52 Results gathered from previous users enable participants to see how their responses compare with those of others, vividly illustrating the complexity of the moral choices and the range of outcomes. Although some have argued that experimental analyses of the trolley problem suffer from low external validity, and the early work has uncovered a wide range of results, projects like these aptly demonstrate the inconsistencies in participants’ ethical reasoning and the need to think carefully about the ethical challenges of forced-choice algorithms.53

As public health experts think about forced-choice scenarios, concerns for fairness, equity, and informed choice should lead to discussions about the possible difference between a pedestrian—who is literally an innocent bystander—and the occupants of an autonomous vehicle—who have voluntarily climbed aboard. By choosing to ride in an autonomous vehicle, passengers have access to a level of safety and convenience that is unparalleled in other forms of transportation. Should those voluntary passengers in the autonomous vehicle—who are directly benefiting from the technology—bear some slight additional risk over pedestrians when a forced-choice scenario arises, even if the pedestrians are fewer in number than the passengers in the autonomous vehicle? Do they have a duty to protect innocent bystanders from harm? Moreover, does boarding an autonomous vehicle constitute implicit agreement to assume slight, but additional, risk? If so, to what extent, and how will autonomous vehicle passengers be informed about any additional level of risk they might be assuming?

Of course, manufacturers might object strenuously to codifying that risk ratio into the forced-choice algorithm. As a research scientist at the Virginia Transportation Research Council and others point out, the owner of a private autonomous vehicle might reasonably assume that the vehicle would preferentially protect its own occupants.54–56 Passengers’ knowledge that autonomous vehicles may be programmed to prioritize the other vehicle or pedestrian in some forced-choice scenarios might well have a chilling effect on their enthusiasm as riders. Consumer acceptance might plummet, because passengers may prefer to take their chances with a live driver rather than ride in an autonomous vehicle in which the odds are stacked, however slightly, against them.

Similarly, one could argue that an autonomous taxicab should prioritize the safety of its own passengers while a paying passenger is aboard. And, from a public health perspective, if concerns about self-preservation lead fewer people to ride in autonomous vehicles, then society as a whole would experience a net loss as more people die in human-driven vehicle crashes, clearly running counter to the goals of public health. Of course, if all cars in a community were autonomous vehicles, and all autonomous vehicles in that community were programmed to give priority to the other vehicle in case of a tie in the number of likely deaths—an algorithm ethically well-grounded in beneficence—we would end up with a net gain of lives saved. However, it seems very unlikely that (1) all cars in a community will be autonomous any time soon and (2) autonomous vehicle manufacturers, or consumers, would find this forced-choice algorithm acceptable. Nevertheless, these somewhat far-fetched scenarios can illuminate important ethical distinctions and warrant open, rational discussion.

In response to the forced-choice dilemma, some autonomous vehicle supporters claim that autonomous cars will never get themselves into such forced-choice situations, so such discussions are merely intellectual exercises without practical application. They optimistically assert that, like omniscient, omnipotent beings, autonomous vehicles will be able to anticipate danger far enough ahead to avoid every potential mishap. Although eventually we might reach such a state of autonomous perfection, programmers right now are determining how the autonomous car should react under various conditions and are implicitly applying various ethical assumptions. They recognize that the vehicles are not perfect and will not be any time soon. What are we to do in the interim while the autonomous vehicles are “learning” from their mistakes?

The projected reductions in morbidity and mortality from autonomous vehicles not only assume a near-ideal implementation, with few if any mechanical or software failures, but they also assume that forced decisions are being made now using solid logic grounded firmly in broadly acceptable ethical precepts. We must deal with these challenges by engaging in informed discussion using well-justified frameworks and accepted principles of public health ethics and by asking the right questions now so that manufacturers, stakeholders, and the government develop guidelines for algorithms, policies, laws, and regulations that promote fairness and equity and align with the values of public health.

A CALL TO ACTION

I have discussed autonomous vehicles and public health ethics, focusing on the design-stage example of forced-choice algorithms and arguing that there is an important and immediate role for public health expertise, advocacy, and community engagement in the discussions about autonomous vehicles. Public health leaders can focus on 4 pragmatic areas with ethical impact, including (1) advocating transparent and collaborative discussion of public health issues related to autonomous vehicles, starting with the forced-choice algorithms under development by manufacturers; (2) expanding the public’s awareness of the ideals of public health and ethical issues relevant to autonomous vehicles; (3) facilitating the inclusion of broad perspectives—including the historically disenfranchised—in the discussion of issues, including community input into when, where, and how autonomous vehicles are tested and deployed; and (4) ensuring that rational, ethically justifiable regulations are developed consistently across states, codified by the appropriate government agency, funded appropriately, and implemented, monitored, and assessed effectively.

Public health leaders should welcome autonomous vehicles as an incredible innovation that will likely transform transportation, especially in urban environments, while saving lives. It is incumbent on public health experts to keep pace with the evolving technology, lead and participate actively in informed discussions, engage communities broadly, advocate rational and consistent regulations, systematically analyze ethical issues, and insist that outcomes be measured and disseminated effectively. It is only through early and consistent engagement that public health leaders will ensure that their unique skills, knowledge, values, and perspective take the lead in the important ongoing conversations about autonomous vehicles.

HUMAN PARTICIPANT PROTECTION

No human participant protection was required because this work did not involve human participants.

Footnotes

See also Goodall, p. 496.

REFERENCES

