
### Safeguarding Black and Brown Lives in the Era of Artificial Intelligence

If Congress were to enact human rights-centered data privacy legislation, with guardrails and prohibitions on certain facial recognition technologies, a more equitable and just future would be within reach.

Earlier this year, Porcha Woodruff, a pregnant African American woman, was wrongly identified as a robbery suspect by facial recognition technology (FRT) and unjustly arrested. The incident laid bare the harms of high-risk AI systems and a cascade of law enforcement failures, including inadequate investigation before officers acted on the software's match. These risks fall hardest on already vulnerable communities, a concern that grows more urgent as the White House's Executive Order on Artificial Intelligence and the related review of federal AI governance unfold.

These recent developments mark a pivotal moment in U.S. AI governance, yet they do not fully address the profound harms of FRT misuse or the enduring trauma it inflicts.

#### Inter-generational Trauma and FRT

The implications of FRT are profound and demand immediate action. Its surveillance capabilities, coupled with its inherent biases, threaten to weave an inescapable web of algorithmic discrimination, raising serious concerns about privacy and fairness. The technology is notorious for substantial technical deficiencies, including a well-documented loss of accuracy for individuals with darker skin tones.

An egregious example of biased analytics misused in law enforcement appears in reported cases where FRT was employed to identify and detain innocent demonstrators exercising their rights in public spaces. This misuse undermines fundamental principles of privacy and free speech, jeopardizes individuals' well-being, safety, and right to due process, and erodes the fabric of our society.

Cases like those of Porcha Woodruff and Robert Williams, both wrongfully arrested in front of their children, point to a deeper problem: these systems fuel violations of individual rights and societal unrest. Such alarming incidents, echoing a long history of injustice, are a stark reminder of the urgent need for stringent regulation of surveillance technologies that encroach on people's liberties.

#### Progress with Constraints

While the Biden administration's AI Executive Order marks a significant stride in U.S. AI governance, its approach to FRT raises concerns. A troubling Government Accountability Office (GAO) report exposed reckless use of facial recognition software by law enforcement agencies operating without training, policies, or oversight; against that backdrop, the executive order's directive to produce a report on AI in the criminal justice system looks inadequate. The GAO has repeatedly documented the absence of training, oversight, and compliance in its reports on federal agencies' use of facial recognition.

The executive order does address training requirements for law enforcement agencies using AI systems with rights implications, yet it does not tackle FRT specifically, draw red-line prohibitions for particular scenarios, or provide remedies for those adversely affected. A somewhat more comprehensive approach appears in the subsequent draft guidance on governance, innovation, and risk management for federal agencies' use of AI, which defines covered AI systems and acknowledges FRT's impact. Even so, it stops short of prohibiting FRT in public spaces or redressing the harm caused by erroneous, FRT-driven arrests and misidentifications.

Why does the executive order merely suggest that agencies consider offering guidance to state, local, tribal, and territorial law enforcement entities when the GAO report strongly underscores the urgent necessity for extensive training and oversight? Is the White House genuinely committed to effectively addressing the pressing issue of algorithmic bias?

#### A Path Forward: Can Congress Take Action?

The executive order rightly prioritizes accountability and calls for congressional actions, placing the onus of making binding decisions on Congress. Congress must enact robust, clear, and targeted legislation restricting law enforcement’s use of FRT and prohibiting specific applications to effectively combat algorithmic bias. This legislative framework would safeguard personal privacy, enhance algorithmic transparency, and ensure the protection of civil and human rights.

Cases of facial recognition misidentification are a wake-up call for Congress to move swiftly and decisively to curb FRT misuse, ensure responsible policing, and uphold the rights of all Americans, irrespective of their cultural, ethnic, or religious backgrounds. Without such measures, wrongful arrests driven by FRT and inadequate law enforcement training will persist, perpetuating inter-generational trauma and undermining the principles of fairness and justice. What kind of society will we cultivate if these policies are implemented? A safer one.

A future characterized by equitable and just systems is attainable if Congress enacts rights-focused data privacy legislation and imposes safeguards and restrictions on specific facial recognition technologies. By outlawing certain applications of FRT and implementing safeguards on its law enforcement use, Congress can pave the way for increased freedom and security for Black and brown communities—where technology works for, not against, us. In this vision, systems become allies rather than adversaries.
