The Justice Department's appointment of its first-ever official dedicated to artificial intelligence (AI) marks a proactive step in preparing the criminal justice system for the transformative impact of this rapidly advancing technology.
Jonathan Mayer, a Princeton University professor specializing in the intersection of technology and law, including national security, criminal procedure, consumer privacy, network management, and online speech, has been named the DOJ's chief science and technology adviser and chief AI officer, according to Reuters.
Announcing the appointment, U.S. Attorney General Merrick Garland emphasized that the Justice Department must keep pace with scientific and technological advancements to uphold the rule of law, ensure national security, and protect civil rights.
Mayer previously served as technology adviser to Vice President Kamala Harris during her tenure as a U.S. senator and as chief technologist of the Federal Communications Commission's Enforcement Bureau. In his new role, he is expected to advise Garland and DOJ leadership on integrating AI into the department's investigative and prosecutorial work, Reuters reported.
Vice President Kamala Harris speaks at a press conference during the UK Artificial Intelligence Safety Summit at Bletchley Park, England, on Nov. 2, 2023. (DANIEL LEAL/AFP via Getty Images)
Mayer will lead a newly established panel comprising law enforcement and civil rights experts to advise Garland and other DOJ officials on the ethical and operational aspects of AI systems. Additionally, he will focus on recruiting more technology specialists to augment the department’s capabilities.
U.S. policymakers are grappling with how to harness the benefits of AI while mitigating the risks of its unregulated proliferation. Deputy Attorney General Lisa Monaco, speaking at Oxford University, highlighted the department's current uses of AI, such as tracing the origins of illicit substances, triaging public tips received by the FBI, and analyzing the large volumes of evidence in major cases like the January 6 prosecutions.
Monaco underscored the dual nature of AI as a potent tool for law enforcement but also a potential catalyst for security threats. She cautioned against the amplification of biases, the dissemination of harmful content, and the empowerment of malicious actors through AI technologies.
Deputy Attorney General Lisa Monaco speaks during an event where Attorney General Merrick Garland addressed DOJ efforts to combat violent crime in many cities during the previous year on Jan. 5, 2024 in Washington, D.C. (Win McNamee/Getty Images)
While acknowledging AI's positive contributions, Monaco warned of its capacity to worsen security vulnerabilities, particularly around election integrity. She pointed to the potential for foreign entities to exploit AI to manipulate social media, spread misinformation, and undermine democratic processes through deceptive practices such as deepfakes and impersonations.
Monaco stressed the urgent need for robust governance and oversight to harness the benefits of AI responsibly while mitigating its adverse effects on democracy and security worldwide, particularly in the context of elections.
U.S. Attorney General Merrick Garland delivers remarks regarding ongoing efforts by the Department of Justice to combat violent crime on Jan. 5, 2024. (Win McNamee/Getty Images)
As AI's influence spreads across sectors worldwide, Monaco urged swift action to leverage the technology's positive potential while guarding against its misuse and protecting democratic principles.