The ethical quandaries of self-driving cars

If a self-driving car must choose between risking harm to the driver or to a pedestrian, which will it choose?

Late last week we got a glimpse of what the future of automobiles will look like. The vision includes filling our highways and cities with cars that can automatically steer, change lanes on their own and parallel park once you arrive at your destination. This may sound like something out of science fiction, but it is not. These are the features that Tesla Motors touted for its Model S when it unveiled the new Autopilot feature last Thursday. It is clear that the age of the fully autonomous car is approaching rapidly, and the technology is outpacing public debate on the regulatory, safety and ethical issues these vehicles present.

The Model S is a high-tech electric luxury vehicle with a starting sticker price of about $70,000. For $2,500, owners can now wirelessly download the latest 7.0 software update, which includes the Autopilot feature. Autopilot allows drivers to cruise along the highway hands- and pedal-free. With a driver's flick of the turn signal, the car can change lanes on its own using its Autosteer function. And with an array of cameras and sensors, the Model S can scan the road, adjust its speed, brake and steer.

According to Tesla, the Autosteer function is still in the "beta" testing stage. The company stresses that drivers must "remain engaged and aware" and that they "must keep their hands on the steering wheel." But early reports of test drives tell a different tale. One journalist reports zipping alongside the Potomac River in Virginia while resting his hands on his lap. Another describes his hands-free test drive down New York City's busy West Side Highway.

Strictly speaking, Autopilot does not turn the Model S into an autonomous vehicle like Google's self-driving car, but the technology should remind us that our region's roadways might be filled with such cars in the not-too-distant future. In March, the Maryland Senate rejected a bill to create a task force to study issues related to self-driving vehicles. The bill would have required a written report to be submitted to the General Assembly and the governor by 2017. The Senate's move was a classic case of policy and law being outpaced by technology. It was also a mistake. By killing the bill, the state legislature turned its back on the fact that while self-driving cars could have obvious benefits, like increasing safety, reducing accidents and boosting economic efficiency, they, like Tesla's Autopilot, also pose ethical questions.

Imagine the Model S is gliding along the highway on Autopilot. A hitchhiker is on the highway's shoulder, the system encounters a glitch, the Autopiloted car strikes her, and she is killed instantly. Who bears moral responsibility for this woman's death? Is it Tesla? The driver? When asked about legal liability in the event of a failure with Autopilot, Tesla CEO Elon Musk asserted that "the responsibility remains with the driver." While this may be true, there is a difference between being legally liable and morally responsible for a harm caused. What is legal is not always ethical, and what is unethical is not always illegal.

The current version of Autopilot cannot recognize traffic lights and is thus limited to highways, but the next generation of the technology will probably be able to operate in a complex urban environment. Now imagine that in a future scenario the car is driving in downtown Baltimore, and a child steps into the road. Further imagine that the car has three mutually exclusive choices: (1) to hit and kill the child; (2) to swerve, hit, and kill two elderly people; or (3) to swerve and kill the "driver." What should the software be programmed to do in such scenarios? Should it privilege the life of the driver and swerve? If so, into whom should it swerve? Swerving into the elderly pair results in two deaths; hitting the child results in one. Arguably, saving two lives is better than saving one. But it is also possible that the child has many years ahead of her, while the elderly are in their twilight years. Should this factor count in our moral decision-making?

At bottom, questions like these are about what is right and wrong and what guides our decisions when confronted with such difficult choices. There are no easy answers. But as companies roll out semi-autonomous and autonomous technology, they need to engage these ethical issues in an open and transparent way. And as lawmakers eager to lure these companies to their states consider guidelines, policies, regulations, and laws, they too need to consider the interplay between technology, policy, law, and ethics. If lawmakers in Maryland do not take these issues seriously, they will soon be left in the proverbial dust.

Jesse Kirkpatrick is the assistant director of the Institute for Philosophy and Public Policy at George Mason University. He also serves as a research consultant for Johns Hopkins University's Applied Physics Lab, where he advises on ethics, technology, and international security. The views expressed are his own.

Copyright © 2018, The Baltimore Sun, a Baltimore Sun Media Group publication