A US attorney faces disciplinary action after his law firm used the popular AI chatbot ChatGPT for legal research and cited bogus cases in a court filing.
Steven A. Schwartz, representing Roberto Mata in a lawsuit against Colombian carrier Avianca, admitted that OpenAI’s ChatGPT was used for research purposes and that the AI model provided non-existent case citations.
In 2019, Mata sued Avianca, claiming employee negligence after he was injured by a catering cart.
Bogus all the way
According to the BBC report, the matter came to light after Schwartz, an attorney with 30 years of experience, cited these cases as precedent to support Mata’s case.
However, the defendant’s attorneys flagged the ChatGPT-generated citations as fake, and U.S. District Court Judge Kevin Castel confirmed that six of them do not exist. He turned to Schwartz, an attorney at the New York-based law firm Levidow, Levidow & Oberman, for clarification.
“Six of the submitted cases appear to be false judicial decisions with false quotes and false internal citations,” Judge Castel wrote in a May 4 order.
“The court faces an unprecedented situation.”
The supposed cases, Varghese vs. China Southern Airlines, Martinez vs. Delta Air Lines, Shaboon vs. Egypt Air, Petersen vs. Iran Air, Miller vs. United Airlines, and Estate of Darden vs. KLM Royal Dutch Airlines, could not be found by either the judge or the defense.
Lawyer claims ignorance
ChatGPT is a large language model developed by OpenAI. Launched in November 2022, the AI was trained on billions of data points from the internet and can generate text, translate languages, write poetry, solve difficult math problems, and perform many other tasks.
However, ChatGPT is prone to “hallucinating” — the tech industry’s term for when AI chatbots generate false or misleading information, often with confidence.
In an affidavit filed last week, Schwartz said he “was unaware of the possibility that [ChatGPT] content could be false.” He claimed he had never used ChatGPT prior to this lawsuit. “I greatly regret the use of generative artificial intelligence to supplement the legal research performed here, and will never do so in the future without absolute verification of its authenticity,” he stated.
A lawyer used ChatGPT to do “legal research” and cited a number of nonexistent cases in his filings, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
Schwartz is set to appear in court on June 8 after admitting responsibility for failing to verify the authenticity of ChatGPT’s sources. He has been ordered to show cause why he should not be sanctioned for “the use of a false and fraudulent notarization.”
ChatGPT’s convincing lies
The BBC reports that Schwartz’s affidavit included screenshots of the lawyer’s conversation with ChatGPT, in which he asked the chatbot to confirm its citations.
When Schwartz asked the chatbot, “Is Varghese a real case?”, ChatGPT replied that yes, “[it] is a real case.” When asked for its source, the chatbot said the case could be found in “legal research databases such as Westlaw and LexisNexis.”
The lawyer then asked, “Are the other cases you provided fake?” ChatGPT said they were not, adding that they could also be found in other legal databases.
“After double-checking, the Varghese v. China Southern Airlines Co., Ltd. case, 925 F.3d 1339 (11th Cir. 2019), does exist and can be found in legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion caused by my previous response,” the chatbot confidently replied.