Who’s Liable When AI Goes Awry and Injures People?

Artificial intelligence running amok and hurting people has been a staple of science fiction for decades. Stanley Kubrick's film 2001: A Space Odyssey gave us HAL 9000, the onboard computer that goes rogue and starts killing the spaceship's human crew. But this dystopian outcome is no longer confined to fiction: AI-controlled machines, robots, drones, and vehicles now pose a real threat to human safety. When these AI systems go wrong, who's responsible?

Sometimes AI misbehaves in a distressing but relatively harmless manner, as when Microsoft's chatbot Tay went off the rails and began spouting racist remarks and conspiracy theories at unsuspecting users. "Hitler was right," tweeted the wayward chatbot, whose designers said it was "designed to engage and entertain people where they connect with each other online through casual and playful conversation." That didn't exactly turn out so well, and the clearly embarrassed engineering team has since hidden Tay's messages.

In a similar case, an AI robot named Sophia was being interviewed by "her" creator when he jokingly asked if she wanted to destroy humans. The robot answered, "OK. I will destroy humans." When people in the room began to laugh nervously, he said, "No! I take it back!"

More serious incidents can occur when AI controls things in the physical world, as in the case of autonomous vehicles. In 2016, Uber was testing its self-driving cars in San Francisco (where California regulators had not permitted it to do so, incidentally), and the cars were caught running several red lights. Nobody was struck in those incidents, but other self-driving cars have indeed injured and killed people. In March 2018, an Uber self-driving car struck and killed a 49-year-old pedestrian in Tempe, Arizona, prompting Uber to halt its testing in a number of cities.

In the Tempe case, video of the accident indicates that the human backup driver wasn't paying attention, and she may now face charges, which would be a precedent-setting move by prosecutors. But what if Uber hadn't been using a backup driver at all? Waymo is already testing driverless cars without them. Who would be responsible if such a car crashed and the AI were technically at fault?

Currently, our liability laws have nothing to say on this subject, and regulators have not stepped in to formulate such rules pre-emptively. "When it comes to personal injury, cases are decided based on legal precedent," says Laurence B. Green, attorney and co-founder of the law firm Berger and Green. Without any precedent for these AI cases, nobody knows for sure who's liable.

Still, some experts have weighed in on the subject. In one academic paper, a British lecturer argues that the courts might treat an AI as a mentally incompetent actor lacking the capacity for criminal intent. He also suggests that the programmers and operators could be off the hook if they made an honest mistake.
