The recent deadly crash of an Uber autonomous car in Arizona highlights the evolving risk related to artificial intelligence (AI). A newly released report by Allianz Global Corporate & Specialty (AGCS), The Rise of Artificial Intelligence: Future Outlook and Emerging Risks, examines future risks and implications and discusses the difference between weak and strong AI applications.
Among the risks highlighted by the report is the increased connectivity among autonomous machines, which could lead to more frequent and widespread cyber losses. While AI could reduce cyberattacks, it could also be used to enable them.
The authors noted that in a world of AI-connected machines, a single machine could be used to repeat the same attack, “leading to an unforeseen accumulation of losses.”
Liability was one of five areas of concern related to advanced or “strong” AI applications outlined in the report. The other four are software accessibility, safety, accountability and ethics.
AI and Liability Concerns
The AGCS report noted that while AI could reduce auto accidents by 90 percent, it raises liability and ethical questions.
New liability insurance models will be adopted, the authors wrote. For example, “AI decisions that are not directly related to design or manufacturing but are taken by an AI agent because of its interpretation of reality, would have no explicit liable parties, according to current law. Leaving the decision to courts may be expensive and inefficient if the number of AI-generated damages start increasing.”
According to the paper, it is difficult to identify the impact of AI technologies before products actually come to market. One solution is “to establish an experts-based agency with the purpose of ensuring AI safety and alignment with human interests. The agency would have certification powers and would establish a liability system under which designers, manufacturers and sellers of AI-based products would be subject to limited tort liability, while uncertified programs that are offered for commercial sale or use would be subject to strict joint and several liability.”
In addition, liability doesn’t have to arise only from product defects, the paper noted; it can also arise from communication between two machines or between a machine and infrastructure.
A new liability model may resemble product liability, in which manufacturers assume liability for product defects, the authors stated. Compulsory auto insurance will likely include product liability, they added.
Technology and patent attorney Coleman Watson says new laws regulating AI software are needed.
“The law is light-years behind rapid advancements in technology, such as AI, because enacting laws necessarily requires a majority of the members of various state legislatures to come to a common agreement on any given issue. That alone is difficult to achieve in our current political environment,” said Watson, who is managing director of Watson LLP. “As a result, there are no current laws that are solely directed to AI. Instead, there are general laws, oftentimes written decades before the development of AI, that courts attempt to apply. For example, there have been a number of cases in the past several years against robotics manufacturers, and those cases have been resolved by applying products liability law.”
He said the issue frequently turns on whether the robot or software was dangerous or defective when it left the manufacturer’s hands.
“The problem, though, is that AI reinforces itself by learning from its own past experiences. In other words, AI will make an adjustment on its own, to make itself more efficient. And that means that at the time of any injury, the robot might not be the ‘same robot’ that left the manufacturer’s hands,” Watson explained.
The Orlando-based attorney said the current legal system is ill-equipped to address AI-related cases.
“To me, this is a three-part answer. The first issue to accept is that AI systems are already everywhere, ranging from apps that allow you to deposit checks into your banking account by using the camera on your cell phone, to Snapchat filters that you use to make entertaining photos to share with friends,” Watson said. “The second issue to accept is that AI reinforces itself by learning from its own past experiences, generally without much guidance from humans. The third issue…is that our legal system is currently capable of interacting with AI only to the extent of imposing liability on the company or person who developed or manufactured the AI. And even then, liability is not automatic because the company or person could very well have fully complied with all regulations when developing or manufacturing the AI.”
Watson said there is no outer legal limit on how far AI can go because the technology is artificial.
“Our legal system consists of criminal and civil law. Criminal law focuses entirely on mens rea (i.e., intent), and an artificial system is incapable of forming criminal intent. And in the civil context, because AI reinforces itself, without humans, by learning from its own past experiences and then makes an adjustment to improve efficiency, then there is no fault by any humans. As a matter of tort law, this makes it unlikely to establish foreseeability of injury,” said Watson.
Trust Factor
The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances, according to Missouri University of Science and Technology researchers.
“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” stated Dr. Keng Siau, professor and chair of business and information technology at Missouri S&T, and Weiyu Wang, a Missouri S&T graduate student in information science and technology. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”
Though AI has a strong future, Siau said, the technology is fraught with trust issues that must be resolved.
“Trust building is a dynamic process, involving movement from initial trust to continuous trust development,” Siau and Wang stated in “Building Trust in Artificial Intelligence, Machine Learning, and Robotics,” an article published in the February 2018 issue of Cutter Business Technology Journal.
The article examines trust concepts in the context of AI applications and human-computer interaction. The authors discuss the three types of characteristics that determine trust in this area: human, environment and technology.
Siau and Wang offered examples of ways to build initial trust in AI systems:
- Representation. The more “human” a technology is, the more likely humans are to trust it. Siau said it’s easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine. He suggested first-generation autonomous vehicles could have a humanoid “chauffeur” behind the wheel to ease concerns.
- Image or perception. Science fiction books and movies have given AI a bad image, said Siau. People tend to think of AI in dystopian terms, much like the Terminator or Blade Runner movies, they explained.
- Transparency and “explainability.” If a technology’s inner workings are hidden in a “black box,” that opacity can hinder trust, the researchers said. “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” said Siau.
Recently, Tesla cited transparency as the reason it spoke out about the latest crash involving its Model X, which is under investigation by the National Transportation Safety Board.
Once trust is developed, the researchers said that creators of AI must work to maintain it. Siau and Wang suggested several options for “developing continuous trust”:
- Usability and reliability. AI “should be designed to operate easily and intuitively,” according to Siau and Wang. “There should be no unexpected downtime or crashes.”
- Collaboration and communication. AI developers want to create systems that perform autonomously, without human involvement, but they must also focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
- Sociability and bonding. Building social activities into AI applications is one way to strengthen trust. A robotic dog that can recognize its owner and show affection is one example, the researchers said.
- Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.
- Interpretability. The ability of a machine to explain its conclusions or actions will help sustain trust.
- Goal congruence. “Since artificial intelligence has the potential to demonstrate and even surpass human intelligence, it is understandable that people treat it as a threat,” Siau and Wang said. “Making sure that AI’s goals are congruent with human goals is a precursor in maintaining continuous trust.”
Policies governing how AI should be used will be important as technology advances, the authors added.