True autonomous vehicles will likely be safer than human drivers. There’s a lot of hype around turning this piece of science fiction into reality, with over 50 companies approved for on-the-road testing.
But they’re not going to be perfect, so what happens when the inevitable occurs?
Let’s say in a ‘worst case scenario’, an autonomous vehicle hits and kills a pedestrian who was legally crossing the road.
If a human were at the wheel, that would be vehicular manslaughter. But in this scenario, with a truly self-driving car (not Tesla’s glorified autopilot), a person would not be behind the wheel.
There’s a critical disconnect here between passenger and digital driver; most decisions the car makes are now out of human control. It would be as if I were driving you, a passenger in the back seat.
If I cause an accident, you’re not liable for negligence. Except now I’ve been replaced by an algorithm. So who is?
There are four immediate options:
- The car
- The company that manufactured the self-driving car
- The government
- You, the passenger
Let’s immediately discount the vehicle. You cannot hold the car responsible because you cannot punish it or extract recompense in any way.
Perhaps, one day, if the AI powering the car is sentient, and granted moral status and legal rights and responsibilities, then we can talk. But until then…
Next up, the company. Cars currently operated by top self-driving companies such as Waymo, Cruise, Nuro, or Argo AI are generally still in beta testing, so the passenger is not personally liable.
Yet the manufacturer is still more likely to be at fault than the passenger long-term, as they designed, developed, and tested the car that caused the accident.
However, some changes may need to be made to how cases are handled if product liability claims become common substitutes for auto collision claims. Product liability is expensive, difficult to pursue (requiring ‘expert testimony’), and can be mitigated by comparative and contributory negligence laws.
While the law may need to evolve with technology, it’s not unreasonable to think injured parties may pursue legal action against the company that produced the defective autonomous vehicle.
Alternatively, the manufacturer could automatically assume liability, but include that within the cost of the car.
For example, Volvo said it would accept responsibility for accidents caused by vehicles operating in Volvo’s fully autonomous mode. While this would resolve the question of responsibility, the resulting price hike (potentially $10,000–15,000) would also make driverless cars less desirable.
It is probable that the manufacturer will take on greater responsibility for what happens after the car leaves the dealership, but to what extent is yet unclear.
Our government has several incentives to subsidize and support the self-driving industry.
Over 3,100 people were killed and 424,000 injured in distracted-driving crashes in 2019. Driverless cars could reduce these numbers significantly, leading to safer travel for ordinary citizens and fewer accidents for law enforcement to field.
Additionally, autonomous vehicles could bring $488 billion in annual savings from reducing accidents and another $158 billion in savings due to reduced fuel costs for our economy.
Companies like Nuro have been pushing for new laws and altered requirements that better fit the new road order of AI-powered cars. For example, a driverless car may not need rearview mirrors.
Some safety laws have already been passed and put into effect; ‘manufacturers of vehicles with automated driving systems must report any and all crashes promptly’ as of June 2021.
It’s possible that the government could pass legislation that helps with liability for self-driving cars. However, I currently view this as unlikely. Rather, they’ll probably continue to invest heavily in autonomous vehicle research and policy.
Finally, you could be responsible. You didn’t directly cause the accident, but you did choose to ride in a driverless car. ‘Negligence’ could mean using self-driving features in inappropriate situations, failing to follow safety rules, or not installing important updates.
However, in any scenario in which self-driving cars are mainstream, I doubt people will be held responsible for car accidents to the same extent as if they had been at the wheel. Adoption would be severely hampered if using such technology exposed you to criminal homicide charges (as happened to an Uber safety driver after a fatal 2018 crash).
This may lead to an entire industry surrounding a new kind of car insurance for using autonomous vehicles.
The Society of Automotive Engineers defines 6 levels of vehicle autonomy, from Level 0 (an ordinary, human-driven car) to Level 5 (a fully self-driving car). Level 1 includes adaptive cruise control, Level 2 adds advanced driver-assistance systems, and Levels 3–5 progressively increase the car's control over acceleration, braking, and steering.
According to Progressive, cars with partial automation such as Tesla's Autopilot fall under Level 2: ‘Partial Driving Automation’.
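The taxonomy above can be summarized in a small lookup table (level names follow the SAE J3016 standard; the helper function is my own rough illustration, not a legal rule):

```python
# SAE J3016 driving-automation levels, keyed by level number.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def driver_is_responsible(level: int) -> bool:
    """Rough rule of thumb (my own simplification): at Levels 0-2 a human
    must supervise at all times, so the human driver remains the default
    liable party; at Level 3 and above, responsibility starts to shift."""
    return level <= 2
```

Under this sketch, a Tesla on Autopilot (Level 2) still leaves the human driver as the default liable party.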
Insurance plans for these semi-autonomous vehicles remain at or above ordinary premiums: the technology is new, with high potential for damages. Elevated mandatory minimums for liability and property damage may even push premiums up, at least temporarily.
Once owning fully autonomous vehicles becomes possible, car insurance could drop in cost as the manufacturer assumes more liability, which in turn could speed adoption. In one survey, 35% of respondents said they'd buy a self-driving car if it came with lower insurance rates.
Companies such as Koop Technologies, Rivian, and Direct Line are already working to fill the emerging market for autonomous auto insurance. One in particular, Avinew, offers usage-based coverage.
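Usage-based coverage prices a policy on how the car is actually driven. A hypothetical premium calculation might look like the following (every rate here is invented for illustration, not taken from any insurer):

```python
def monthly_premium(base_rate: float,
                    miles_human: float,
                    miles_autonomous: float,
                    human_rate_per_mile: float = 0.05,
                    autonomous_rate_per_mile: float = 0.02) -> float:
    """Hypothetical usage-based premium in dollars.

    Assumes autonomous miles are cheaper to insure than human-driven
    miles, since the manufacturer absorbs part of the risk. All rates
    are invented placeholders for illustration.
    """
    return (base_rate
            + miles_human * human_rate_per_mile
            + miles_autonomous * autonomous_rate_per_mile)
```

With a $30 base rate, 100 human-driven miles and 400 autonomous miles would come to $43 for the month under these made-up rates.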
The human in the vehicle has always been the primary candidate for liability up until this point. It will be interesting to see if that changes tremendously.
So, who’s responsible?
Ultimately, liability may be assigned by a decision tree that weighs a combination of these entities depending on context.
Based on my (admittedly limited) knowledge of the space, this is my current prediction for what will happen when an accident occurs:
- If you’re in a semi-autonomous vehicle with ‘retained control’ (the most common today), such as a Tesla, you’re currently responsible as the driver. Your auto insurance plan will be engaged.
- If you’re in an autonomous ride-sharing Uber/Lyft style vehicle, the company is at fault in the same way it would be today (essentially taking responsibility from the driver). They’ll have insurance plans of their own, as Waymo does through its partnership with insurer Trov. This is a ‘fleet insurance model' enabling companies to self-insure or add cars to a corporate policy covering liability claims.
- If you’re in a personal, fully autonomous vehicle when an accident occurs, three things could happen based on context:
- If it’s clear the other party is at fault, you’re fine.
- If the vehicle malfunctioned in some way, it’s the manufacturer’s responsibility. This could include improper assembly or a lack of safety-testing.
- Otherwise, your self-driving insurance will cover it much as if you had caused the accident yourself, except you cannot face criminal charges personally.
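That branching logic can be sketched as a toy decision function (the entity names and conditions are my own simplification of the prediction above, not any legal standard):

```python
def liable_party(vehicle_type: str,
                 other_party_at_fault: bool = False,
                 malfunction: bool = False) -> str:
    """Toy model of the liability decision tree sketched above.

    vehicle_type: 'semi_autonomous', 'ride_share', or 'personal_full'.
    """
    if vehicle_type == "semi_autonomous":
        # Retained control (e.g., a Tesla today): the human driver
        # and their auto insurance are on the hook.
        return "driver"
    if vehicle_type == "ride_share":
        # Fleet insurance model: the operating company self-insures
        # or covers liability under a corporate policy.
        return "operating company"
    if vehicle_type == "personal_full":
        if other_party_at_fault:
            return "other party"
        if malfunction:
            # Product-liability territory: improper assembly,
            # lack of safety testing, and so on.
            return "manufacturer"
        # Otherwise the owner's self-driving policy pays, with no
        # criminal exposure for the passenger.
        return "owner's self-driving insurance"
    raise ValueError(f"unknown vehicle type: {vehicle_type}")
```

For instance, a malfunctioning personal vehicle routes blame to the manufacturer, while the same accident with no malfunction and no at-fault third party falls to the owner's policy.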
While this is fairly high-level, our policies need to cover more than fatal accidents: everything from a fender-bender to a clipped mailbox. Assigning blame after the fact is troublesome if no regulations are already in place.
A clear understanding of responsibility will be necessary for autonomous vehicles to become mainstream. What we end up seeing implemented (soon?) could be incredibly different from what I outlined.
Much like the original automobile, autonomous vehicles have the ability to change the world for the better. As we innovate in our technology, we must also innovate in related policy.
Thanks for reading :)