SAN DIEGO — In the near future, attorneys and the court system will be challenged to sort out who is liable when products equipped with artificial intelligence cause harm to persons or property.
“Artificial intelligence is here, it’s not coming,” said Tim Casey, a law professor who teaches ethics at California Western School of Law. “We have to figure it out in order to provide competent advice to our clients.”
Artificial intelligence or “AI” involves the development of computer systems that perform tasks that normally would be performed by humans. These can include visual and speech recognition, data sorting, and — perhaps the biggest — making independent decisions.
Automakers, appliance manufacturers, job recruitment firms, investment portfolio managers, and other businesses are increasingly relying on AI.
Although the promise of AI is improved performance, mistakes will happen. When machines are making the decisions, it may be hard to determine exactly who is at fault.
Professor Josh Davis of the University of San Francisco School of Law believes that American society is at the beginning of “an explosion” in the use of artificial intelligence. He anticipates that attorneys will remain busy resolving disputes over decisions made by machines for years to come.
“I think AI is going to be pervasive,” he said.
James Barrat, the author of “Our Final Invention: Artificial Intelligence and the End of the Human Era,” said the AI we invite into our lives “has a giant potential for abuses.” Some computer databases have been shown to have biases against minorities and women, Barrat said.
Computers learn from humans and can be influenced by their opinions. When AI is used to screen job applications, there is a danger that biases over race and gender can be factored into hiring decisions, said Davis.
According to a 2017 report in American Banker, computer programs used by lenders to screen borrowers are likely to have trouble making credit decisions without inadvertently favoring consumers who live in affluent areas.
“There are all sorts of embedded value judgments in building artificial intelligence,” Davis explained. “As a society we are going to have to figure out who is responsible if we don’t like those value judgments.”
Placing the Blame
Barrat is concerned about the growing use of digital assistants that communicate through speech. If information about someone's health were overheard by such a device and sold to a database, it potentially could be used to deny that person an insurance policy, he said. Rather than blame the machine, the legal responsibility likely would be placed on the manufacturer.