In Episode 151, Kelly Twigger discusses the first discovery decision with generative AI implications, which arose when a party relied on what appear to be hallucinated citations in objecting to the grant of a protective order. Iovino v. Michael Stapleton Assocs., Ltd. (July 24, 2024).
Introduction
Welcome to this week’s episode of our Case of the Week series brought to you by eDiscovery Assistant in partnership with ACEDS. My name is Kelly Twigger. I am the CEO and founder at eDiscovery Assistant, your GPS for ediscovery knowledge and education, and the Principal at ESI Attorneys. Thanks so much for joining me today.
Each week on the Case of the Week, I choose a recent decision in ediscovery and talk to you about its practical implications. This week’s decision is a very important one: the first discovery decision we’ve seen that implicates generative AI. I say “implicates” because there is really only speculation in this case as to whether the plaintiff leveraged ChatGPT and relied on hallucinated citations to case law, meaning cases that do not exist. Still, this is the first time we’ve seen the use of generative AI to cite decisions in a discovery case, and the case is included in the eDiscovery Assistant database because of the underlying issues.
The most important takeaway today is the use of generative AI for legal research purposes. This issue has been in our world for the last couple of years, and we’ll talk about some of the other decisions that have covered it, but this is the first time we are seeing it in a discovery decision. Our Generative AI issue tag in eDiscovery Assistant, which previously applied only to rules, has now been amended to apply to this decision as well.
All right, let’s dive into the case.
Background
This week’s decision comes to us from Iovino v. Michael Stapleton Assocs., Ltd., otherwise known as MSA. This is a decision from July 24, 2024, so just a couple of weeks ago, from United States District Judge Thomas Cullen, who is in the Western District of Virginia. Judge Cullen has 13 decisions in our eDiscovery Assistant database. As always, we add our proprietary issue tagging structure to each of the decisions in the database. The issues for this week’s decision include protective order, 30(b)(6) corporate designee, and Generative AI.
Facts
We are before the Court on the plaintiff’s objections to the Magistrate Judge’s entering of a protective order. The underlying case involves a violation of the federal whistleblower law, and the court notes that discovery in the case has been very contentious in the three years it has been going on. MSA is a federal contractor that has an agreement with the State Department to train explosive detection canines. The company employed Iovino, the plaintiff, as a veterinarian for approximately two years before firing her in August 2017.
Iovino believed that she was fired for reporting alleged issues about MSA’s contract with the State Department to that agency’s Office of the Inspector General. The current discovery dispute centers on plaintiff Iovino’s request to depose six current or former MSA employees under Rule 30(b)(6) about their work on MSA’s contract with the State Department and whether or not the State Department’s Touhy regulations apply.
The Touhy regulations are housekeeping rules that establish policies, procedures, and responsibilities for agencies to respond to requests for official information or employee testimony in legal proceedings. They are named after the Supreme Court’s decision in Touhy v. Ragen, in which the Supreme Court held that an employee may not be held in contempt for failing to produce demanded information where appropriate authorization has not been given. In practical terms, the Touhy regulations govern whether agency employees can be compelled to testify, including under Rule 30(b)(6), and such requests often must receive administrative approval before the testimony can proceed. The plaintiff here disputed that the request for the 30(b)(6) depositions was subject to the Touhy regulations. MSA moved for a protective order, and the Magistrate Judge granted it, holding that the Touhy regulations did, in fact, apply.
Iovino filed a timely objection to the order, and the parties were before the District Court on the objections to the Magistrate Judge’s ruling.
Analysis
As to the underlying issue of the validity of the protective order, the Court here found that the Touhy regulations are clear on their face that they apply to deposition requests like the plaintiff’s and overruled the plaintiff’s objections on those grounds. The Court also overruled all of the plaintiff’s ancillary objections as not being contrary to law.
Here’s where things get a little bit dicey and implicate generative AI. In a brief accompanying the motion, plaintiff cited multiple decisions using Westlaw citations. As the Court noted, “Shockingly, her objections rely, in part, on citations to sources and quotations that appear not to exist.” Following the analysis of the objections to the protective order, the Court turned to the need for plaintiff to show cause as to why she should not be sanctioned under Rule 11 for making “court filings for an improper purpose or with frivolous arguments, as well as for other reasons,” including “when attorneys act in bad faith and engage in deliberate misconduct in an attempt to deceive the court.”
The Court noted that Iovino’s brief cites multiple cases and quotations that the Court could not find when it independently reviewed Iovino’s sources. Specifically, two of the cited cases did not appear to exist at all. Iovino also cited two opinions that do exist, but attributed quotations to them that do not appear in the decisions themselves. The hallucinated cases were first pointed out by MSA in its response, and plaintiff did not reply or address the allegations of fabricated citations. MSA posited to the Court that the citations were the result of “ChatGPT run amok.”
Plaintiff did provide the Court with supplemental authority to support her objections after MSA raised the issue, but she did not offer any explanation as to where the seemingly manufactured citations and quotations came from or who was to blame for the error in submitting them. According to the Court, the plaintiff’s “silence is deafening.” As such, the Court ordered Iovino’s counsel to show cause why they should not be sanctioned or referred to their respective bars for professional misconduct.
This case comes to us from a sanctions perspective under Rule 11; it’s not a Rule 37 or discovery sanctions case. That raises the question of why we’re including it on the Case of the Week, where we usually limit our discussion to discovery issues. I decided to raise it because it’s becoming a prevalent issue, and I want to get the word out. I want you to share this with everyone you can: ChatGPT is not a legal research tool. And because this is the first time we’ve seen the issue in a discovery decision appropriate for the eDiscovery Assistant database, we’re covering it here this week.
Takeaways
Let’s talk about what our takeaways are.
Well, as I’ve already mentioned, Iovino is not the first case in which hallucinated citations from using ChatGPT as a legal research tool led to sanctionable conduct. We’ve seen this in several other cases across the country, and it’s fitting to finally discuss it here because we’re seeing it in a discovery decision. Please, please share this with your colleagues, your bar associations, the students you teach, anywhere you can, to prevent other lawyers from engaging in this practice.
The most widely discussed case, and the first one we saw, was Mata v. Avianca, Inc. from the United States District Court for the Southern District of New York. In Mata, counsel submitted nonexistent judicial opinions with fake quotes and citations created by ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question. The Court in Mata articulated very clearly why fake decisions cause issues, and I want you to pay attention to this quote:
The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of other arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.
That’s a really important and compelling argument for exactly why these hallucinations create so many problems. There are problems from everyone’s perspective, and so it’s really important that we bring this issue to light for purposes of our practice.
Here, the plaintiff in Iovino has not yet offered any explanation, and the Court gave counsel 21 days to show cause why they should not be sanctioned or referred to their bar associations for misconduct. We’re going to keep an eye out for that ruling. The case is currently on appeal from the District Judge’s decision regarding the protective order, but the order to show cause remains under the District Judge’s purview and is still pending, so hopefully we’ll see something on it in the next couple of weeks.
There are several other instances of this issue popping up around the country in non-discovery-based decisions, and those have therefore not been included in eDiscovery Assistant. You’ll remember that Michael Cohen, Donald Trump’s one-time personal lawyer and fixer, admitted that he unwittingly passed along bogus artificial intelligence-generated legal case citations that he got online. Why his lawyers didn’t review them before submitting them is certainly a fair question. Citing hallucinated decisions has serious ethical implications for counsel: an attorney in Colorado was suspended for using artificial intelligence to generate fake case citations in a legal brief and then lying about it.
Even in my classroom, students have asked whether ChatGPT is appropriate to use for legal research, and the answer is, most emphatically, no. It is not a legal research tool. Make sure that you validate any citations you present to the court: first, that they are real decisions, and second, that they stand for the proposition you are citing them for.
Generative AI is an incredible technological development that can help lawyers in many, many ways. We leverage Gen AI to create summaries for all of the decisions in our eDiscovery Assistant case law database, and it’s a hugely valuable tool, but it is not a substitute for legal research tools. To lean on ChatGPT or any generative tool for legal research is a bit like sticking your foot in wet concrete and hoping you won’t get stuck. You will. Don’t do it.
Conclusion
That’s our Case of the Week for this week. Thanks for joining me. We’ll be back again next week with another decision from our eDiscovery Assistant database. As always, if you have suggestions for a case to be covered on the Case of the Week, drop me a line. If you’d like to receive the Case of the Week delivered directly to your inbox via our weekly newsletter, you can sign up on our blog. If you’re interested in doing a free trial of our case law and resource database, you can sign up to get started.
Have a great week!