Will Mandatory Generative AI Use Certifications Become the Norm in Legal Filings?

On Friday, June 2, Judge Brantley Starr of the Northern District of Texas issued what appears to be the first standing order regulating the use of generative AI in court filings. Generative AI has recently emerged as a powerful tool on many fronts, offering capabilities for research, drafting, image creation, and more. But along with this new technology comes the opportunity for abuse, and the legal system is taking notice.

Judge Starr’s new order requires the following:

All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.

Judge Starr calls this a “Mandatory Certification Regarding Generative Artificial Intelligence,” and he will strike any party filing that does not include the required certificate; attorneys also “will be held responsible under Rule 11 for the contents of any filing that they sign.” The order states that this restriction is necessary because generative AI is not well suited to writing legal briefs, citing (1) its propensity to “make stuff up – even quotes and citations” and (2) the chance that the artificial intelligence incorporates some type of unknown or unanticipated bias. Judge Starr observes that while attorneys have sworn to set aside personal prejudices and biases, programmers of generative AI products have sworn no such oath.

The order is timely and may be at least in part a response to litigation in the Southern District of New York, where an attorney appearing before that court filed a brief in which “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” Roberto Mata v. Avianca, Inc., 1:22-cv-01461-PKC (S.D.N.Y. May 4, 2023). The attorney now faces potential sanctions for relying on generative AI software for case citations that were not confirmed through well-known, widely used legal sources.

While this may be the first order addressing lawyers’ use of generative AI, other courts are very likely to follow suit.

Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved. National Law Review, Volume XIII, Number 157

About this Author

In today’s digital economy, companies face unprecedented challenges in managing privacy and cybersecurity risks associated with the collection, use and disclosure of personal information about their customers and employees. The complex framework of global legal requirements impacting the collection, use and disclosure of personal information makes it imperative that modern businesses have a sophisticated understanding of the issues if they want to effectively compete in today’s economy.

Hunton Andrews Kurth LLP’s privacy and cybersecurity practice helps companies manage data and...
