How can artificial intelligence change property/casualty insurance claim processing?
Two industry experts answered that question from a few angles during Carrier Management’s annual InsurTech Summit. Frank Giaoui, founder and CEO of Optimalex Solutions, and Shane Riedman, general manager and vice president of anti-fraud analytics at Verisk, shared how their companies tap AI to improve claims management.
Benefits and Opportunities
Riedman sees the immediate opportunity in current AI technology as accelerating and augmenting — not supplanting — what he described as “reasoning and judgment activities” that require human intervention. Large language models could help solve workforce challenges, for example, as knowledge and skill sets erode on the insurance industry’s front line.
Giaoui said insurance claims hold “the next largest opportunity for insurance carriers to reduce their loss ratio.” He launched his company, which uses AI predictions to reach fair, timely and consistent settlement values, after realizing that when damages exist but are difficult to quantify, “they are randomly compensated,” he said.
“Most of the time, they are not compensated,” Giaoui continued, “but sometimes they are super overcompensated. And in both situations, it’s very unfair. It’s unfair socially, but it’s also very inefficient.”
Verisk uses AI to detect claims fraud. Riedman said the threat landscape has evolved in the past year, and he pointed to advanced forms of image manipulation in claims that enable bad actors to create “images from whole cloth.”
“At Verisk, we see ourselves in a virtual arms race with these bad actors,” he continued. “So, we’re developing detection mechanisms using some of that same AI technology.”
He also described a separate Verisk solution that combines machine learning with large language models to speed up the detection of fraud, waste and abuse by medical providers and improve its accuracy. Machine learning analyzes a provider’s pattern of practice, while a large language model reads the medical records and pulls out key pieces of information.
“We don’t want a machine to make a determination as to whether a medical provider is committing fraud,” Riedman said. “But what our machine can do is accelerate that fraud detection process from what would have normally taken days or weeks to a matter of hours. And we’re doing that today.”
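For readers who want a concrete picture of that division of labor, here is a minimal sketch in Python using stand-in models and assumed inputs (not Verisk’s actual system): a pattern-of-practice model scores a provider’s billing behavior, a language-model-style extractor pulls key facts from the record, and the combined result goes to a human investigator, who makes the determination.

```python
# Illustrative two-stage triage: ML scores the billing pattern, an LLM-style step
# extracts record facts, and the machine only flags cases for human review.
from dataclasses import dataclass

@dataclass
class ProviderReview:
    provider_id: str
    anomaly_score: float      # from the pattern-of-practice model
    extracted_facts: dict     # from the record-reading step
    needs_human_review: bool  # the machine never decides fraud on its own

def score_billing_pattern(billing_features: list[float]) -> float:
    """Stand-in for an ML model trained on provider billing patterns."""
    baseline = 1.0
    return sum(abs(x - baseline) for x in billing_features) / len(billing_features)

def extract_record_facts(medical_record_text: str) -> dict:
    """Stand-in for a large language model that reads the record and pulls key fields."""
    text = medical_record_text.lower()
    return {"diagnosis_mentioned": "sprain" in text,
            "treatment_count": text.count("treatment")}

def triage_provider(provider_id: str, billing_features: list[float],
                    record_text: str, threshold: float = 0.5) -> ProviderReview:
    score = score_billing_pattern(billing_features)
    facts = extract_record_facts(record_text)
    return ProviderReview(provider_id, score, facts, needs_human_review=score >= threshold)

print(triage_provider("P-001", [1.8, 0.4, 2.1],
                      "Treatment for sprain; treatment repeated."))
```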
Using AI to Combat Nuclear Verdicts and Social Inflation
While the plaintiffs’ bar has embraced AI and data sharing, insurance carriers have been reluctant to share data, Giaoui said, adding, “I think this is really an old-fashioned way.”
If the data is anonymized, secured and compliant with legal security requirements, “then we, as an industry, insurance carriers, should be able to have access to federated learning,” he said. At Optimalex, insurers don’t have direct access to external data, but they can tap into learning models that use outside data to make predictions.
“Optimalex never sells any data,” he said. “We use the data, both internal and external, to provide the prediction.” These data-point-based predictions can be used for early claim triage, risk mitigation and litigation management.
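The federated idea Giaoui points to can be illustrated with a short sketch (toy numbers and assumed parameters, not Optimalex’s implementation): each carrier fits a simple severity model on its own claims locally, and only the model parameters are pooled, never the underlying claims records.

```python
# Minimal federated-learning illustration: local fits stay with each carrier,
# and only parameters are averaged centrally, weighted by claim counts.
import numpy as np

def local_fit(severities: np.ndarray) -> dict:
    """Each carrier computes settlement-severity parameters on its own data."""
    return {"mean": float(severities.mean()), "std": float(severities.std()), "n": len(severities)}

def federated_average(local_models: list[dict]) -> dict:
    """Combine parameters across carriers; raw claims data never moves."""
    total = sum(m["n"] for m in local_models)
    return {"mean": sum(m["mean"] * m["n"] for m in local_models) / total,
            "std": sum(m["std"] * m["n"] for m in local_models) / total}

carrier_a = local_fit(np.array([12_000, 45_000, 30_000]))  # stays on carrier A's systems
carrier_b = local_fit(np.array([8_000, 22_000, 95_000]))   # stays on carrier B's systems
print(federated_average([carrier_a, carrier_b]))           # shared model, no shared claims
```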
AI assists with time management and workflow automation, he said. Optimalex has quantified with its clients that between 25 percent and 60 percent of claim adjusters’ time — depending on the situation — is focused on claims documentation, not resolution.
Too much time is spent on “data collection, processes, communication, internal back and forth, or sometimes with the insured,” Giaoui said, noting that “some part of this data collection can be automated, and we can structure data to essentially save the time of the claims expert so that they can focus on the actual claims resolution where their expertise cannot be automated and will never be replaced by artificial intelligence.”
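As a hedged illustration of that kind of documentation automation (hypothetical note format and fields, not Optimalex’s product), structured data can be pulled from a free-text adjuster note so the adjuster does not have to re-key it.

```python
# Extract a few structured fields from a free-text claim note.
import re

def structure_claim_note(note: str) -> dict:
    """Pull claim number, reported loss amount and a contact flag from a note."""
    claim_no = re.search(r"\bCLM-\d+\b", note)
    amount = re.search(r"\$\s?([\d,]+(?:\.\d{2})?)", note)
    return {
        "claim_number": claim_no.group(0) if claim_no else None,
        "reported_loss": float(amount.group(1).replace(",", "")) if amount else None,
        "insured_contacted": "spoke with insured" in note.lower(),
    }

note = "Spoke with insured re CLM-48213; estimated water damage $12,400. Awaiting contractor docs."
print(structure_claim_note(note))
```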
Testing for AI Bias
Riedman said that testing AI for bias should be “top of mind for every insurer.”
Verisk takes a multi-pronged approach. The company begins by assessing whether a data input could be a proxy for race or another protected status. For example, it does not and will not use drug conviction data in analytics because “we know that’s often a proxy for race,” Riedman said.
He added that Verisk also has a rigorous peer review process in which other experienced data scientists analyze the company’s code and modeling to check for latent or inherent bias, and the company has developed technology to test its models for bias.
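One simple form such a proxy check could take (illustrative assumptions only; Verisk has not published its method) is to measure how strongly a candidate input tracks a protected attribute on a review sample and exclude it when the association is too strong.

```python
# Flag a candidate model input as a likely proxy for a protected attribute.
import numpy as np

def proxy_strength(feature: np.ndarray, protected: np.ndarray) -> float:
    """Absolute Pearson correlation between a candidate feature and a protected attribute."""
    return float(abs(np.corrcoef(feature, protected)[0, 1]))

def flag_proxy(feature, protected, threshold: float = 0.3) -> dict:
    strength = proxy_strength(np.asarray(feature, float), np.asarray(protected, float))
    return {"strength": round(strength, 3), "exclude_from_model": strength >= threshold}

# Synthetic review sample: a feature that closely tracks the protected attribute gets flagged.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)
candidate = protected + rng.normal(0, 0.5, size=500)
print(flag_proxy(candidate, protected))
```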
Go Deeper
View the full panel discussion on the InsurTech Summit 2024 website.