Lawyer Uses ChatGPT, Submits Legal Brief Packed With Inaccuracies

The rapid rise of artificial intelligence has resulted in widespread concerns about the technology’s impact on society.

One major issue involves the spread of false or misleading information. Thus far, this problem has taken on a partisan tone, with conservatives complaining about an apparent left-wing bias in the algorithms behind popular chatbots and liberals fretting over the supposed proliferation of right-wing misinformation.

As it turns out, the real threat could come in the form of “hallucinations.”

Experts warn that artificial intelligence occasionally seems to simply make things up, regurgitating false information with the same confidence with which it presents legitimate facts.

Attorney Steven Schwartz learned this lesson the hard way after he relied on ChatGPT to assist in creating a legal brief as part of a lawsuit against Colombian airline Avianca. The result was a document filled with inaccuracies.

In fact, several of the court rulings cited by ChatGPT — like Miller v. United Airlines and Martinez v. Delta Airlines — are entirely fictional. The AI chatbot asserted that “reputable legal databases” provided details about these fake cases.

Schwartz said he “was unaware of the possibility” that the chatbot could provide false information. He submitted an affidavit describing his use of ChatGPT as a “supplement” to his work.

Nevertheless, he is facing possible sanctions for the errors included in the brief and is scheduled to appear at a hearing on June 8, where penalties could be imposed.

“I greatly regret using ChatGPT and will never do so in the future without absolute verification of its authenticity,” Schwartz wrote.

This case is being described as the first of its kind to use AI in the creation of a complex legal brief. The notion of such high-tech hallucinations is nothing new, though.

According to Chris Meserole, director of the Brookings Artificial Intelligence and Emerging Technology Initiative, AI chatbots “are not actually trained to present truthful information.”

Instead, he explained that developers design the platforms to “present text that is of human quality intelligence and reasoning capability,” adding: “It’s a slight distinction but a really important one.”

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, made it clear shortly after the chatbot debuted that users should not rely on its answers “for anything important,” noting that it can easily “create a misleading impression of greatness.”