A Self-Driving Car Has Killed Someone
Why is the Government Trying to Protect Robot Car Companies From Liability?
Just last Sunday, a driverless Uber car struck and killed a pedestrian crossing the street in Tempe, Arizona. This grim event marks a troubling milestone: the first pedestrian killed by a robot-controlled car.
Police reports indicate the victim was walking a bike across the street, outside a crosswalk, at about 10:00 PM. The Uber was reportedly traveling at 40 miles per hour in autonomous mode, though a human operator was in the driver's seat. The Uber never slowed down, and the pedestrian did not appear impaired. Troublingly, the cause of the accident has yet to be determined.
Uber immediately suspended its self-driving tests in Arizona and nationwide. Meanwhile, voices within the tech industry have urged companies working on similar technology to pause. Raj Rajkumar, head of Carnegie Mellon University's leading self-driving laboratory, called on tech companies to slow down development.
“This isn’t like a bug on your phone. People can get killed. Companies need to take a deep breath. The technology is not there yet,” says Rajkumar. “This is a nightmare all of us working in the domain always worried about.”
So why, as the tech industry wrings its hands over the safety of these robot cars, are state and federal leaders pushing legislation designed to encourage their rapid deployment in the marketplace and shield the companies that make them from liability?
In January 2018, Waymo, the driverless vehicle technology arm of Google's parent company Alphabet, received permits to begin offering robot taxi service in Arizona. Waymo had been test-driving its technology around Phoenix for months and planned to launch a full-scale robot taxi service in April; this was encouraged by Arizona Governor Doug Ducey as early as 2016.
“Arizona welcomes Uber self-driving cars with open arms and wide-open roads. While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses,” Ducey said at the time.
At the federal level, legislators are pushing hard to create a business-friendly climate for self-driving vehicles. The AV Start Act, a bill currently under consideration in the Senate, does not prohibit forced arbitration between robot car companies and consumers. So if a rider or pedestrian is badly injured by a self-driving car, they would not be able to join a class action lawsuit or sue the maker of the technology. Instead, the dispute would be forced into arbitration. This arrangement obviously favors the car manufacturer, which hires the arbitrator and brings it repeat business. It would also make the proceedings private, keeping the public in the dark about serious flaws in these self-driving vehicles.
Consumer advocates and personal injury attorneys are crying foul.
“Going back 50 years, I’ve never seen a more brazen attempt to escape the rule of safety law, and the role of the courts to be accessible to their victims,” longtime consumer advocate Ralph Nader told CNN. “With their unproven, secretive technology that’s fully hackable, the autonomous vehicle industry wants to close the door on federal safety protection and close the door to the court room.”
Advocates warn that terms-of-service agreements also bind consumers into forced arbitration – a tactic already used by ride-sharing companies like Uber and Lyft.
The collision of a tech race to get self-driving cars on the road with legal liability loopholes for automakers will, sadly, make the Arizona victim's death the first of many.