AI in the Courtroom: How ChatGPT Helps & Hurts Insurance Subrogation

How has the advent of large language model AI affected the legal industry? Is there a use for it in the fields of insurance and subrogation? In this episode of On Subrogation, Rathbone Group member partner and experienced subrogation lawyer Rebecca Wright discusses the possibilities and limitations of generative AI applications in the legal realm.

Generative AI (genAI) and large language models (LLMs) like ChatGPT and Gemini have become part of many people’s daily lives and of industries like engineering, academia, and even insurance subrogation. New technology is exciting – social media changed the way courts handle notice, and skip-tracing has improved how we track down elusive tortfeasors.

LLMs are beginning to change how businesses interact with each other and with their consumers, how professionals use and interact with data, and how insurance companies evaluate the viability of subrogation claims. New applications are sure to come. However, all new tech must be vetted before we call it “safe” to rely on; we must acknowledge the limitations of even the most impressive technology. That’s the major rub with genAI in law.

What is a Large Language Model?

There is a misconception that LLMs like ChatGPT are basically just super search engines, but this is not the case. LLMs are predictive models: they are trained on billions of web pages, articles, papers, and other documents to learn how to mimic human speech and writing patterns. They learn to present information as if it’s coming from a human rather than a search engine. The AI itself is not a search engine. A Google search spits out both reputable and non-reputable sources; ChatGPT draws on both indiscriminately when predicting an answer.
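
To make the “predictive, not retrieval” point concrete, here is a toy sketch of how a language model picks its next word. It is a deliberately tiny illustration, and every probability in it is invented; real LLMs do the same kind of thing with billions of learned parameters rather than a hand-written table.

```python
import random

# Invented next-token statistics standing in for what a real model learns
# from its training text. Keys are two-word contexts; values map candidate
# next tokens to probabilities.
next_token_probs = {
    ("the", "court"): {"ruled": 0.5, "held": 0.3, "found": 0.2},
    ("court", "ruled"): {"that": 0.7, "in": 0.2, "against": 0.1},
}

def predict_next(context):
    """Sample the next token from the learned distribution for this context."""
    dist = next_token_probs.get(context, {"<unknown>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# The model outputs whatever is statistically plausible -- it never checks
# whether the resulting sentence is true.
print(predict_next(("the", "court")))  # e.g. "ruled"
```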

LLMs are “self-aware” only in the sense that programmers include a warning at the outset of a conversation with the bot. ChatGPT tells the user it is a predictive text generator and that they should fact-check everything it says. It is specifically designed to give an answer that appears confident and correct, much the way a person would. However, all it is doing is predicting plausible text based on the huge amounts of data it was trained on.

AI Has Personality Problems: Cautions & Cautionary Tales

There are three big issues with LLMs that make them risky to use in the legal field: (1) they can hallucinate, (2) they don’t keep secrets, and (3) they are biased.

Hallucinations: ChatGPT & The Case of the Made Up Brief

A New York attorney used ChatGPT to draft a motion that he then filed in federal court. Opposing counsel went to look up his citations and could not find any of them anywhere in the legal record. The attorney admitted he had used ChatGPT to draft the motion and hadn’t thought he needed to double-check all the sources.

In fact, he said he had tried to locate a few, couldn’t, and simply assumed the bot had access to sources he did not. What actually happened is that the bot made up text that sounded like a legal brief full of legitimate case law, based on the style and content of the data it was trained on. When an answer is completely made up like this, it is referred to as a hallucination.

Leaks: Don’t Expect AI to Keep Your Conversations Private

Those made-up answers, though, are not conjured out of thin air. LLMs absorb data, incorporate it into the model, and then predict a response based on that data. The model does not let you pick and choose which pieces of absorbed data surface in an answer; no prompt can guarantee that.

As a result, insurance and subrogation companies should never put private or sensitive information into conversations with LLMs. Any identifying or confidential information, sensitive case details, and the like could be absorbed by the model and spit back out in an answer at any time, anywhere.
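
As a practical safeguard, sensitive identifiers can be scrubbed from a prompt before it ever reaches a third-party model. Here is a minimal sketch of that idea; the patterns and placeholder labels are illustrative assumptions, not a complete PII filter.

```python
import re

# Illustrative redaction rules -- a real filter would cover far more cases.
REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}-\d{6,}\b"), "[CLAIM_NO]"),   # e.g. CL-482913
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize claim CL-482913 for jdoe@example.com, SSN 123-45-6789."
print(scrub(prompt))
# Summarize claim [CLAIM_NO] for [EMAIL], SSN [SSN].
```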

Subjectivity Bias: ChatGPT Has Some Old-Fashioned Views

Because LLMs are trained on human-made material, they are not objective. Programmers do what they can to put up guardrails, but the technology reflects society, even when it seems smarter. Researchers at MIT, for instance, found that biases are evident in how LLMs perceive professions: ChatGPT consistently assumed that flight attendants are female and lawyers are male. This is a direct reflection of the material it was trained on. It also reflects the fact that LLMs do not provide objective summaries of search results; they predict the best answer based on trends and patterns gleaned from their training data.

We can’t use ChatGPT to draft briefs. Companies are working on training models that can write briefs grounded in real case law, but that is not yet a reality. There are, however, other AI applications already improving how insurance companies operate and how subrogation professionals handle claims.

Improving Customer Service

Insurers are using LLMs to optimize their own chatbots and improve customer service. They are developing AI models that can converse naturally about complex topics and retrieve the correct data for the policyholder.
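
Under the hood, the usual trick is to look up verified policy data first and hand it to the model along with the customer’s question, so the model restates facts instead of guessing. This sketch assumes a made-up record layout and stubs out the model call; it illustrates the pattern, not any insurer’s actual system.

```python
# Hypothetical policy records for illustration.
POLICIES = {
    "POL-1001": {"holder": "A. Smith", "deductible": 500, "status": "active"},
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"(model reply grounded in: {prompt!r})"

def answer(policy_id: str, question: str) -> str:
    record = POLICIES.get(policy_id)
    if record is None:
        return "I can't find that policy; let me connect you with an agent."
    # Feed verified facts into the prompt so the model restates them
    # rather than inventing its own.
    facts = ", ".join(f"{k}={v}" for k, v in record.items())
    return call_llm(f"Known facts: {facts}. Customer asks: {question}")

print(answer("POL-1001", "What is my deductible?"))
```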

Streamlining Research Efforts

LLMs have also made internet searches more intuitive: you can search in natural language, such as by asking a question, and the model converts it into useful search terms behind the scenes. Google’s search AI now offers a summary of the main points with links to top-ranking sites, which can save time and target research efforts more efficiently.
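
The conversion step can be as simple as stripping filler words from the question. Here is a rough sketch of that idea; the stop-word list is a small illustrative sample, and real systems use far more sophisticated query rewriting.

```python
# Tiny illustrative stop-word list; real search engines use much richer ones.
STOP_WORDS = {"what", "is", "the", "a", "an", "for", "in", "of",
              "how", "do", "does", "to", "my", "can", "i"}

def to_search_terms(question: str) -> str:
    """Reduce a natural-language question to bare keywords."""
    words = question.lower().strip("?").split()
    return " ".join(w for w in words if w not in STOP_WORDS)

print(to_search_terms("What is the statute of limitations for subrogation in Ohio?"))
# statute limitations subrogation ohio
```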

Optimizing Subrogation Potential

In subrogation, developers are working on AI models that can comb through thousands of insurance claims and identify subrogation potential without human intervention. Specialized bots can be programmed to prioritize viable claims, flag ones that need a second look, weigh the cost of pursuit against recovery potential, and so on. This lightens the load on subrogation specialists and minimizes the chances of pursuing non-viable claims or missing opportunities on viable ones.
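
At its simplest, that triage logic is a cost-benefit score. The following sketch ranks claims by expected recovery net of pursuit cost; the fields, numbers, and thresholds are invented for illustration and stand in for what a trained model would estimate.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    paid_amount: float      # what the insurer paid out
    liability_score: float  # 0..1 estimate that a third party is at fault
    pursuit_cost: float     # estimated cost of pursuing recovery

def expected_net_recovery(c: Claim) -> float:
    """Expected recovery weighted by liability, minus the cost to pursue."""
    return c.paid_amount * c.liability_score - c.pursuit_cost

claims = [
    Claim("C-1", 12_000, 0.8, 2_500),
    Claim("C-2", 3_000, 0.4, 2_000),
    Claim("C-3", 45_000, 0.6, 6_000),
]

# Work the most promising claims first; flag the rest for human review.
for c in sorted(claims, key=expected_net_recovery, reverse=True):
    net = expected_net_recovery(c)
    flag = "pursue" if net > 0 else "second look / close"
    print(f"{c.claim_id}: expected net {net:,.0f} -> {flag}")
```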

So while AI is not yet capable of reliably generating legal documents, it can still be a useful tool for research, customer service, and operational data analysis. As a leader in the tech-savvy subrogation movement, Rathbone Group will be sure to keep up with the newest applications of AI in the legal field.

Curious to learn more about subrogation? Visit Rathbone Group’s Subrogation Blog, YouTube series, and podcast, On Subrogation, for educational discussions on important topics, strategies, and developments in the field of insurance and subrogation. For more information on our claims management services or to suggest a new episode of On Subrogation, reach out at info@rathbonegroup.com.