A federal appeals court in New Orleans is weighing a proposal that would require lawyers to certify either that they did not use AI tools to draft their briefs or that any AI-generated content in their court filings was reviewed for accuracy by a human.
In a notice released on Tuesday, the 5th U.S. Circuit Court of Appeals unveiled what appears to be the first rule among the 13 federal appeals courts aimed at regulating the use of generative AI tools such as OpenAI's ChatGPT by lawyers appearing before the court.
Under the proposed rule, attorneys and parties representing themselves would need to certify that, to the extent an AI program was used in preparing a filing, all citations and legal analysis in it were reviewed for accuracy.
The proposal warns that attorneys who misrepresent their compliance with the rule could have their filings stricken and face sanctions. The 5th Circuit is accepting public comment on the proposed rule until January 4.
Anticipating that lawyers and self-represented litigants will increasingly turn to AI, the court is seeking input on how the proposed rule should address such use, Lyle Cayce, the 5th Circuit's clerk of court, said in a statement.
The proposal comes as judges nationwide debate whether safeguards are needed for the use of emerging technologies in legal proceedings, given ongoing questions about the reliability of AI systems like ChatGPT.
Scrutiny of lawyers' use of AI intensified earlier this year when two New York attorneys were sanctioned for submitting a legal brief that included inaccurate case citations generated by ChatGPT.
The 5th Circuit’s proposal follows the lead of other jurisdictions where similar regulations have been implemented to address the use of AI tools by legal professionals.
For instance, U.S. District Judge Brantley Starr of the Northern District of Texas was among the first to require attorneys to certify that no portion of a filing was drafted by generative AI, or that any AI-drafted language was checked for accuracy by a human. Similarly, the U.S. District Court for the Eastern District of Texas announced that, starting December 1, attorneys who use AI tools will be required to review and verify any computer-generated content.
That court emphasized that while AI systems can be valuable aids, they are no substitute for an attorney's own judgment and problem-solving, cautioning that the output of such tools may be inaccurate as a technical or legal matter.