A New York lawyer has run afoul of a judge after submitting legal research generated by the artificial intelligence chatbot ChatGPT. The plaintiff’s attorneys filed a brief citing several precedent cases in a lawsuit against an airline over an alleged personal injury.
A Lawyer Using ChatGPT?
As subsequently admitted in an affidavit, the court found the following cited cases “to be nonexistent”:
Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
Shaboon v. Egyptair, 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013)
Petersen v. Iran Air, 905 F. Supp. 2d 121 (D.D.C. 2012)
Martinez v. Delta Airlines, Inc, 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017)
Miller v. United Airlines, Inc, 174 F.3d 366 (2d Cir. 1999)
The “research” was compiled by attorney Steven A. Schwartz, who has over three decades of experience. He stated in the affidavit that he had never before used ChatGPT for legal research and was “unaware that its content could be false.” Screenshots in the affidavit show the lawyer asking the chatbot, “Is Varghese a real case?” and receiving the response “Yes.” When asked for sources, ChatGPT told the lawyer that the case could be found in “legal research databases such as LexisNexis and Westlaw.” Asked whether the other cases it had provided were fake, ChatGPT answered “No,” insisting they could be located in the same databases.
AI Should Not Be Trusted 100%
As entertaining as chatbots like ChatGPT can be or as sophisticated as they may appear, they are still susceptible to “hallucinations” – answers that sound perfectly coherent but have no relation to the real world.
If you are searching for legal precedent that must hold up in the real world, a chatbot prone to the hallucinations of a spicy autocomplete is not a tool you should rely on. The lawyer wrote that he deeply regrets using ChatGPT to supplement his legal research and will never do so again without absolutely verifying the authenticity of its output.