Lawyer uses ChatGPT for case research and lands in deep trouble

A lawyer in New York has landed in hot water after he used OpenAI’s ChatGPT to generate a filing with case research that cited non-existent cases.

The judge in the matter says the incident has created an “unprecedented circumstance”.

The lawyer — Steven Schwartz — works for a firm representing a passenger suing an airline over an alleged personal injury.

The airline’s legal team discovered that six cases cited in the plaintiff’s filing appeared to be bogus, with made-up quotes and fictional internal citations.

Although the case research was presented by Peter LoDuca, the plaintiff’s lawyer, it was compiled by Schwartz, who has over 30 years of experience as an attorney.

Schwartz told the court he “greatly regrets” using the generative language model, which he claimed he had never done before, and said he did not know it could generate false content.

He committed to never using AI to “supplement” legal research without verifying the authenticity of its responses.

Screenshots attached to Schwartz’s affidavit explaining what happened show him asking the large language model AI chatbot if one of the cases is real.

After ChatGPT insisted that the case was legitimate, Schwartz asked it to provide a source for its information.

It told him the case could be found on legal reference databases like LexisNexis and Westlaw.

ChatGPT’s website includes a disclaimer below the text input box clearly stating that it “may produce inaccurate information about people, places, or facts”.

However, the chatbot can also be accessed through plug-ins installed in other apps, such as Microsoft Word, which the lawyer may have been using.

LoDuca and Schwartz have been ordered to explain why they should not be sanctioned for the incident at the next hearing in the matter, set for 8 June.
