Uber’s self-driving car accident and the dilemma of responsibility

National Transportation Safety Board (NTSB) investigators examine a self-driving Uber vehicle involved in a fatal accident in Tempe, Arizona, U.S., March 20, 2018. A woman was struck and killed by the vehicle on March 18, 2018. National Transportation Safety Board/Handout via REUTERS

Last Sunday, March 18, 2018, may mark a significant day in the history of artificial intelligence. Beyond the headlines that still dominate the news, Uber’s fatal self-driving car crash will be remembered in endless keynotes, infographics, and the like for years to come.

After all, this was the first time on record that an autonomous car killed a pedestrian.

Despite the public uproar that ensued, this was something that everyone in the industry expected but most turned a blind eye to.

The hotchpotch of regulations and perspectives

More than 40 states have passed different rules governing the testing and certification of autonomous vehicles. The National Highway Traffic Safety Administration (NHTSA), which oversees the flourishing driverless car space under the Motor Vehicle Safety Act, has so far deferred to the states.

In 2016, when a Tesla Model S electric sedan was involved in the first fatal accident of an autonomous car, killing the driver while the car was in Autopilot mode, the National Transportation Safety Board (NTSB) and NHTSA came to different conclusions. NHTSA not only cleared Tesla of any fault but also praised the safety of its design, highlighting that crash rates involving Tesla cars had dropped by almost 40 percent since the widespread introduction of Autopilot. Meanwhile, the NTSB pointed out how Tesla’s design flaws permitted the driver’s overreliance on the car’s autonomous capabilities, stating, “The way in which the Tesla Autopilot system monitored and responded to the driver’s interaction with the steering wheel was not an effective method of ensuring driver engagement.”

The folly of level 3 driverless cars

Now, you may already be familiar with the SAE levels of autonomous driving (0 through 5). These guidelines define how much responsibility an autonomous vehicle is able to take on, and how much human oversight and intervention is necessary and expected.

And here is the thing with autonomous vehicles operating at level 3. By definition, level 3 is where drivers are still necessary but are able to completely shift “safety-critical functions” to the vehicle under certain traffic or environmental conditions. The driver is still present and expected to intervene if necessary, but is not required to monitor the situation the way they would at the lower levels.

And that is exactly the dilemma of responsibility we face. If you have ever engaged in a task that requires continuous focus but where interaction between you and the task at hand is rare, you have experienced the “vigilance decrement” phenomenon, even if you have never heard of the theory. Wherever humans are expected to spend long, monotonous periods watching for signals that are easy to spot but impossible to predict, their attention and vigilance drop very quickly. It’s like standing guard, scanning the horizon for the one-second appearance of an object that may show up once every 24 hours.

Avoiding the vigilance decrement was one of the main arguments for why driverless cars could save lives: they don’t get exhausted, burn out, or lose focus. So how can we expect humans to keep watch over AVs in an environment even more conducive to losing focus?

And that is the main reason Ford has been pushing straight for level 4 autonomy, skipping level 3 altogether.

This may seem like a bold move, but as Ford’s product development chief Raj Nair explained a year ago in an interview with Bloomberg: “Company researchers have tried to roust the engineers with bells, buzzers, warning lights, vibrating seats and shaking steering wheels. They’ve even put a second engineer in the vehicle to keep tabs on his human counterpart. No matter—the smooth ride was just too lulling and engineers struggled to maintain situational awareness.” John Krafcik of Waymo, Alphabet’s self-driving outfit, is of the same opinion. In late 2016, after the Tesla Model S fatal crash, he reiterated in an interview with Bloomberg the “need to take the human out of the loop.”

Meanwhile, the Arizona governor called Uber’s crash an “unquestionable failure” and suspended the company’s autonomous testing in the state indefinitely.

But without a comprehensive, nationwide regulatory framework that sets the legal basis for responsibility, companies will keep trying to hand responsibility over to drivers through the fine print of legal agreements, and governors will react after the fact instead of proactively and responsibly paving the way for a technology that can save many lives.

But there is also good news. Last year the House passed the SELF DRIVE Act, putting the NHTSA in charge of regulating self-driving car safety and preempting competing rules at the state level. A similar bill, the AV START Act, has already passed the Senate Commerce Committee and now awaits consideration by the full Senate.

There is nothing new about the law lagging behind technology. From the first copyright laws in England, which arrived centuries after the invention of the printing press, to the EU’s latest privacy and data regulations, our legislative process is slow by design.

Gandhi once said that laws are “codified ethics.” We live in an era where technology advances at exponential rates, and change that once unfolded over generations now happens in just a few years. Our minds may not be able to “unlearn and relearn” fast enough to grasp and create the required ethics in time. Maybe we should rethink our legal system in a way that not only enables it to proactively regulate new and fast-moving technologies but also takes responsibility and paves the way for inventions that can save thousands of lives.
