
### Artificial Intelligence Is Fabricating Fake Legal Cases and Making Its Way Into Real Courtrooms

Artificial intelligence (AI) has produced deepfake explicit images of celebrities, generated music, driven race cars, and spread propaganda, among other innovations.

So it is hardly surprising that AI is also making its mark on our legal systems. Courts must resolve disputes based on the law, which lawyers present to the court as part of a client’s case. The emergence of fake laws, fabricated by AI, in legal disputes is therefore a cause for serious concern.

This situation raises not only ethical and legal dilemmas but also jeopardizes trust and confidence in legal systems worldwide.

How Fake Cases Arise

Undoubtedly, generative AI possesses the capacity to reshape society, including many facets of the legal domain. However, its use comes with responsibilities and risks.

Lawyers are trained to apply their specialized expertise and experience carefully, and are generally risk-averse. Nonetheless, some practitioners (including self-represented litigants) have been caught out by artificial intelligence.

AI models are trained using extensive datasets and can generate new content, both textual and visual, when prompted by individuals.

While the content produced this way can appear convincing, it can also be inaccurate. When the training data is inadequate or flawed, the AI model attempts to “fill in the gaps,” a phenomenon known as “hallucination.”

In certain scenarios, generative AI hallucination may not pose a problem and can even be viewed as a display of creativity. However, if AI fabricates false information that is subsequently used in legal proceedings, it becomes a serious issue, particularly for individuals with limited access to legal services.

This combination can introduce errors and shortcuts into legal research and document preparation, damaging the reputation of the legal profession and eroding public trust in the administration of justice.

Current Instances

The 2023 US case of Mata v. Avianca is the most prominent example: lawyers submitted a brief containing fabricated extracts and case citations to a New York court, having used ChatGPT for their research.

The attorneys, unaware that ChatGPT can hallucinate, failed to verify that the cited cases actually existed. The repercussions were severe. Upon discovery, the judge dismissed their client’s case, sanctioned the attorneys for acting in bad faith, imposed fines on them and their firm, and exposed their actions to public scrutiny.

Despite the negative repercussions, instances of fabricated cases persist. Michael Cohen, former attorney to Donald Trump, provided his own lawyer with cases generated by Google Bard, another generative AI chatbot. He expected his lawyer to fact-check the cases; instead, they were accepted as genuine (despite being false) and included in a submission to the US Federal Court.

Fabricated cases have also surfaced in recent legal matters in Canada and the UK.

To prevent the reckless use of generative AI from undermining public trust in the legal system, it is imperative to address this trend. Continued reliance on these tools without adequate precautions can mislead the courts, harm clients’ interests, and undermine the foundation of the rule of law.

Remedial Measures

Legal authorities worldwide have responded to this issue through various initiatives.

Several US state bars and courts have issued guidance, opinions, or rulings on the use of generative AI, ranging from responsible implementation to outright bans.

Guidelines have been developed by law societies in the UK and British Columbia, as well as the courts of New Zealand.

In Australia, the NSW Bar Association has released a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have published articles on responsible usage in alignment with solicitors’ conduct guidelines.

While some legal professionals and judges, along with the general public, are well-versed in generative AI and its implications, others may lack awareness. Clear guidance undoubtedly plays a crucial role in addressing this issue.

However, a mandatory approach is needed. Lawyers cannot treat generative AI tools as a substitute for their own judgment and due diligence; they must verify the accuracy and reliability of the information these tools provide.

Australian courts should introduce practice notes or guidelines outlining the expected use of generative AI in litigation. These rules can also serve as a reference for self-represented litigants, demonstrating the courts’ awareness of the issue and their proactive steps to mitigate it.

To promote the ethical use of AI among lawyers, formal guidelines could be adopted by the legal profession. Technology competence should be a mandatory component of lawyers’ continuing legal education in Australia.

Establishing clear standards for the responsible and ethical use of generative AI by lawyers in Australia will foster appropriate adoption and bolster public trust in the legal profession, the courts, and the country’s legal system at large.


(Authors: Michael Legg, Professor of Law, UNSW Sydney, and Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession, UNSW Sydney)

(Disclosure Statement: Vicki McNamara is affiliated with the Law Society of NSW (as a member). Michael Legg has no known affiliations beyond his academic appointment and does not work for, consult for, own shares in, or receive funding from any business or organization that would benefit from this article.)


This article was republished from The Conversation under a Creative Commons license. Read the original article. Aside from the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.
