Last year, 40,000 Americans died in car crashes. Another 4.5 million were seriously injured. Put differently, every seven seconds someone was hurt on the road. These numbers are astronomically high: the death toll alone is more than double the number of deaths caused by prescription opioids in 2017. But that’s not the worst of it.
The saddest part is that the vast majority of these deaths and injuries were entirely preventable. Reports by the National Highway Traffic Safety Administration (NHTSA) identify human error as the “critical reason” for 94% of all crashes. Just three factors (speeding, drunk driving, and phone use while driving) contribute to about 80% of all deaths.
If it weren’t for these mistakes, thousands would still be alive today.
This should have everyone thinking: If driving is so dangerous, why are we so willing to get behind the wheel? And how can we maintain our standards of mobility without leaving behind a sea of casualties?
Policymakers have responded by stiffening penalties for reckless driving. But Silicon Valley has a different answer: self-driving cars.
Equipped with a comprehensive sensor suite of radars, cameras, and lidars (laser-based ranging sensors), fully autonomous vehicles can make driving decisions without any human input whatsoever. And by removing humans from the driver’s seat, this technology could make our roads safer for everyone.
Many remain bullish on self-driving cars, and investors are pouring billions of dollars into driverless technology. Just last month, Volkswagen invested $2.6 billion in Argo AI, the self-driving company backed by Ford. In May, Cruise Automation, General Motors’ autonomous outfit, raised over $1 billion. Beyond carmakers, tech companies like Uber and Lyft have successfully raised billions around the idea. And well-established corporations like Google and Tesla have spent years developing driverless vehicles with some of the best engineering talent in the world.
But as it turns out, autonomous vehicles can make mistakes too.
Driverless technology, at least for now, doesn’t fully work. In March 2018, a self-driving Uber Volvo struck and killed a pedestrian in Tempe, Arizona, leading Uber to suspend its autonomous program. The car misidentified its victim, Elaine Herzberg, first as another vehicle and then as a bike before striking her as she jaywalked. While the Uber crash was the first pedestrian death attributed to a driverless vehicle, three drivers have died to date, all in Teslas with Autopilot engaged. Reporters have noted that at least two of the crashes were eerily similar, raising worries that, despite these disasters, Tesla hasn’t fixed a persistent flaw in its software.
And while these deaths are dramatically fewer than those caused by human drivers, the crashes have put a damper on public trust.
According to a 2019 Reuters poll, over half of US adults think autonomous vehicles are more dangerous than human-driven ones. The majority think these vehicles should be held to higher standards than conventional cars. And 64% wouldn’t even buy one.
Many are critical of the government for not doing enough to protect against the risks of self-driving cars.
No federal safety standards for autonomous vehicles have been enacted, nor has Congress passed any legislation concerning driverless cars. Currently, companies aren’t required to file safety reports or publicly disclose how well their cars are performing. All this is in spite of growing industry demand for federal guidelines.
States have been more involved, though it’s hard to say whether they’re moving in a positive direction. At least 41 states, not counting D.C., have considered legislation pertaining to autonomous vehicles. California, Nevada, and Arizona have even permitted the testing of autonomous vehicles without human safety drivers. But states are divided on the level of oversight they require. While California demands that companies “report the number of miles driven as well as the number of disengagements,” or times a human driver took control back from the autonomous system, Nevada imposes no such requirement.
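To see what those disclosures actually tell us, consider a back-of-the-envelope calculation. The sketch below uses entirely made-up figures (no real company data) to show how California-style reports convert into “miles per disengagement,” the rough yardstick analysts use to compare programs:

```python
# A minimal sketch of the "miles per disengagement" yardstick.
# All figures are hypothetical, not drawn from any real report.
reports = {
    "Company A": {"miles_driven": 1_200_000, "disengagements": 110},
    "Company B": {"miles_driven": 450_000, "disengagements": 980},
}

for company, report in reports.items():
    # More miles per disengagement suggests the system needed
    # human intervention less often.
    rate = report["miles_driven"] / report["disengagements"]
    print(f"{company}: {rate:,.0f} miles per disengagement")
```

Without a reporting requirement like California’s, neither number exists, and comparisons like this are impossible.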
Safety concerns are not the only reason behind public distrust. There are fears of job loss: more than 4 million professional drivers could find themselves unemployed if driverless technology goes mainstream. There’s also uncertainty about coexistence: What might a road shared by self-driving and human-driven cars look like?
And there are philosophical worries too: If forced to choose, should an autonomous vehicle swerve and kill its passenger or run over a rogue pedestrian? This is the latest rendition of the decades-old ‘trolley problem’ (a moral dilemma so famous that entire Facebook pages are dedicated to producing memes about it). In the classic version, a spectator must choose between letting an oncoming trolley run over five people or pulling a lever to divert it so that it kills only one. What would you do?
In 2018, researchers ran an experiment to survey responses to the problem. They built a viral online game, The Moral Machine, that depicted self-driving cars in various scenarios and gave users binary choices. Should the car run over three elderly pedestrians, for example, or three youthful passengers? The results varied widely. Participants from Western and Latin American countries were, in general, more likely to save the young than participants from Eastern countries. The researchers hypothesized that cultural values influence moral intuitions, pulling the decision one way or the other.
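To make the aggregation concrete, here is a purely illustrative sketch (the Moral Machine’s actual analysis is far more sophisticated, and these responses are invented) of how binary choices can be tallied by region:

```python
# Hypothetical tallies of binary trolley-style choices, grouped by region.
# 1 = chose to spare the young; 0 = chose to spare the elderly.
responses = {
    "Western": [1, 1, 0, 1, 1],
    "Latin American": [1, 1, 1, 0, 1],
    "Eastern": [0, 1, 0, 0, 1],
}

for region, choices in responses.items():
    share = sum(choices) / len(choices)
    print(f"{region}: {share:.0%} chose to spare the young")
```

Even this toy tally shows how regional averages can diverge, which is the pattern the researchers reported at scale.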
The moral relativity on display is troubling. Should manufacturers program driverless cars to spare elderly pedestrians or youthful passengers? Will programmers imbue autonomous vehicles with Western values for the entire world? Or will manufacturers produce different cars for different regions altogether?
One way to understand the trolley problem is to dismiss the prospect of a universal solution. On this view, the point of the trolley problem is simply to expose the diversity of human moral thought. Searching for a solution is pointless because the dilemma is designed not to have one.
If proponents of this view are correct, then rather than posing an intractable moral question for autonomous vehicles, the trolley problem shouldn’t really be a problem at all. There’s no right answer! And even though trolley-like dilemmas arise in almost every context, that doesn’t mean we should arrest technological advancement.
The trolley problem aside, there is reason to believe that other public fears about self-driving cars are overblown. A study by Intel and Strategy Analytics found that autonomous vehicles could add up to $7 trillion to the economy by 2050, growth likely to generate thousands of jobs and seed adjacent industries, blunting critiques about job loss. And the fears of human-robot coexistence could be managed by phasing in driverless cars slowly: first limiting them to certain lanes or highways, then allowing them to proliferate across the country.
Indeed, self-driving cars could avert the public health crisis unfolding on our roads, and there are reasons to be bullish about the technology. But we must have realistic expectations about where we are: as long as rare but recurring bugs remain unpatched, we’ve got a long way to go.