
Uber is in hot water again after one of its self-driving cars hit and killed a pedestrian in Tempe, Arizona. Forty-nine-year-old Elaine Herzberg was crossing a highway with her bicycle when the vehicle failed to stop and hit her at full speed. Although the car was in self-driving mode, a human safety driver was behind the wheel to intervene in case the software malfunctioned. Herzberg’s family has sued Uber over the accident and is campaigning for much stricter regulations around artificial intelligence and the testing required before a self-driving car is considered roadworthy. For now, Arizona’s governor has suspended the testing of autonomous vehicles while the investigation continues and the guidelines are revisited.

This tragedy has brought renewed attention to issues surrounding insurance, AI, and road safety in general. This isn’t the first time a self-driving automobile has caused problems on the road. A few of Google’s self-driving cars crashed in 2016, but Google attributed most of those incidents to human drivers failing to follow the rules of the road, and none of them were fatalities.

Traffic engineers and programmers alike have long awaited the day when cars would drive themselves, for a number of important reasons. First and most importantly, the vast majority of car accidents are caused by some sort of human error, from distracted driving to miscalculation. While people tend to consider themselves good drivers, their judgment of reaction time, speed, and distance can be imperfect, causing collisions with wildlife or other drivers. A car, in principle, can be programmed to account for more information than a person could ever process and to react accordingly, and that is the core of the safety argument for autonomy.
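To make the reaction-time point concrete, here is a minimal back-of-the-envelope sketch in Python. The speed, reaction delays, and deceleration rate are assumptions chosen purely for illustration, not measured figures for any real vehicle or driver.

```python
# Illustrative stopping-distance arithmetic.
# All numbers below are assumptions for the sake of the example.

def stopping_distance_m(speed_kmh: float, reaction_s: float, decel_ms2: float = 7.0) -> float:
    """Distance travelled during the reaction delay plus braking to a full stop."""
    v = speed_kmh / 3.6                           # convert km/h to m/s
    reaction_distance = v * reaction_s            # car keeps moving while the driver reacts
    braking_distance = v ** 2 / (2 * decel_ms2)   # v^2 / (2a), constant deceleration
    return reaction_distance + braking_distance

if __name__ == "__main__":
    speed = 70  # km/h, an assumed speed for illustration
    human = stopping_distance_m(speed, reaction_s=1.5)    # typical human reaction delay (assumed)
    machine = stopping_distance_m(speed, reaction_s=0.2)  # hypothetical sensor-to-brake latency
    print(f"Human driver:     ~{human:.0f} m to stop from {speed} km/h")
    print(f"Automated system: ~{machine:.0f} m to stop from {speed} km/h")
```

Even with these rough numbers, shaving a second off the reaction delay removes roughly twenty metres of travel before braking begins, which is the kind of margin the safety argument rests on.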

However, a whole new set of legal problems arises. For one, self-driving cars aren’t interacting only with other perfectly programmed cars; they’re still sharing the road with human drivers, cyclists, and pedestrians, who are deeply imperfect. These cars aren’t yet equipped to “see” the way that humans can. In this specific case, the car couldn’t tell that the object on the road was a human, something most people would have recognized immediately. NASA has long worked on helping robots “see” better when lighting is dim or objects are unfamiliar, but even NASA admits that its work is far from done.
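As a rough illustration of why ambiguous classifications matter, here is a hedged sketch in Python of a hypothetical perception gate that only triggers braking when it is confident about what it sees. The Detection structure, confidence threshold, and distances are invented for the example and do not describe Uber’s actual software.

```python
# A minimal sketch of a confidence-gated braking decision.
# Entirely hypothetical: the labels, threshold, and distances are assumptions
# used only to illustrate the trade-off between braking on uncertain
# detections and ignoring them.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "bicycle", "unknown"
    confidence: float   # 0.0 to 1.0
    distance_m: float   # estimated distance to the object

def should_brake(det: Detection, confidence_threshold: float = 0.8) -> bool:
    """Brake only for detections the system is confident about and close enough to matter."""
    return det.confidence >= confidence_threshold and det.distance_m < 40.0

# An object that flickers between labels at low confidence never crosses the
# threshold, so the system takes no action even though a real obstacle is there.
ambiguous = Detection(label="unknown", confidence=0.45, distance_m=35.0)
print(should_brake(ambiguous))  # False, despite a genuine hazard in the road
```

The point of the sketch is that a human glancing at the same scene resolves the ambiguity instantly, while a system built around confidence thresholds can stall on exactly the cases that matter most.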

Self-driving cars must also be programmed with answers to difficult philosophical and ethical questions that not everyone agrees on. For example, the AI controlling a car may have to decide between killing its passengers and killing others on the road in the moments leading up to a crash. Lawyers and futurists have argued since the beginning of the tech boom that technology is not inherently neutral, and as Facebook and Uber each see their day in court, the tech-using public is confronting this issue.

In addition, many auto insurance brokers are now wondering how they will account for self-driving cars in policies, for both human and AI drivers. When only humans were driving, it was relatively straightforward to assign blame after an accident, but who assumes blame when one car has no driver? Some have speculated that the programmer and manufacturer will have to assume liability, since their faulty programming may be to blame for an accident.

Most companies testing driverless cars have ceased road testing out of respect for Herzberg but plan on resuming once they feel their programs are ready to handle the rigors of the real world.