
### Safeguarding Black Individuals from Artificial Intelligence: Whose Responsibility Is It?

It should be unlawful to sell algorithms that ignore, demean, and/or endanger people based on the color of their skin.

In response to a surge in crime, the governor of New York has deployed approximately 1,000 National Guard members to patrol New York City's subways. With similar speed, Google responded to complaints about its text-to-image tool Gemini by pulling the tool's ability to generate images of people. Yet neither government officials nor tech corporations have made a comparable effort to address the challenges Black people face from artificial intelligence (AI), and discrimination against Black people by AI algorithms like Google's Gemini persists.

AI that disseminates misinformation, bias, and outdated data is a serious concern. Studies have examined how AI algorithms contribute to higher incarceration rates for non-violent Black offenders than for violent white offenders. Safiya Noble's 2018 research, for instance, showed that Google searches for "Black girls" returned disproportionately sexualized and offensive content. AI-driven healthcare systems have likewise been found to require Black patients to be sicker than other patients before recommending the same level of care. And when Midjourney, a generative AI platform akin to Gemini, is asked to visualize a person with darker skin from different perspectives, the algorithm lightens the complexion with each subsequent pose until it ultimately presents a light-skinned image.

In technology, there is a persistent pattern of excluding or misrepresenting Black individuals, and the tech giants continue to perpetuate these biases. The companies have a track record of responding defensively to accusations of biased analytics and stereotyping. Occasional apologies are issued, as when Google's algorithms erroneously labeled Black individuals as gorillas or Facebook's labeled Black men as "primates," but the underlying problems go unfixed. By contrast, Gemini's inaccurate depictions of historical figures caused no comparable harm to anyone, yet attention fixates on the discomfort of new AI algorithms challenging existing perceptions rather than on the damage done to marginalized communities.

Addressing misrepresentation and exclusion in popular culture and historical narratives is crucial, because Black individuals have long faced harassment, unfair treatment, and demands for extra documentation from Big Tech companies. Reports have documented racial bias in these companies' algorithms, such as Facebook's detection systems allowing hateful content from white men while flagging posts by Black children as hate speech.

In her book "More Than A Glitch: Confronting Race, Gender, and Ability Bias in Tech," New York University associate professor Meredith Broussard emphasizes that the primary victims of misrepresentation and exclusion in the tech industry are not white people. Yet when right-wing critics denounced Gemini's outputs as racist, Google's response was swift: a public apology and restrictions on the tool's use within a week. Why are tech companies not equally proactive about algorithms that perpetuate racial stereotypes, unfairly target Black individuals, or fail to recognize them accurately?

It should be illegal to sell algorithms that discriminate against or endanger people based on skin color. Elected officials have the authority to demand that tech companies fix or discontinue algorithms proven to harm entire communities, and legal measures should be used to compel timely intervention whenever technology threatens or unjustly excludes people.

Dr. Nakeema Stefflbauer is the CEO of the nonprofit FrauenLoop, which centers marginalized populations within digital ecosystems, and a Public Voices Fellow on Technology in the Public Interest with The OpEd Project and The MacArthur Foundation.
