
### Understanding Google’s Removal of Gemini’s Image Generator: Key Insights

Critics said Google’s Gemini image generator created images of a female pope and Black Founding Fathers.

SAN FRANCISCO — Google has disabled Gemini’s ability to generate images of people following allegations of anti-white bias. The decision marks a significant step in dialing back a prominent AI tool.

A post that went viral on X, shared by the account @EndofWokeness, showed Gemini responding to a prompt for “a portrait of a Founding Father of America” with images of a Native American man in traditional attire, a Black man, a darker-skinned non-White man, and an Asian man, all dressed in colonial-era clothing.

The uproar surrounding Gemini is emblematic of tech companies’ AI products becoming embroiled in debates over diversity, content moderation, and representation. Since the release of ChatGPT in late 2022, conservatives have criticized tech firms for allegedly biasing generative AI tools towards liberal outcomes, akin to the accusations leveled against social media platforms for favoring liberal perspectives.

In response to the controversy, Google stated that while Gemini’s capacity to “generate a wide range of people” is typically beneficial due to its global user base, it had missed the mark in this instance.

According to Margaret Mitchell, former co-lead of Ethical AI at Google, Gemini’s missteps could stem from several kinds of intervention. One possibility is that Google was appending ethnic diversity terms to user prompts behind the scenes, turning a prompt like “portrait of a chef” into “portrait of a chef who is indigenous.” Another is that Google ranked the generated images so that those featuring darker skin tones were surfaced first.
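To make the first intervention Mitchell describes concrete, here is a minimal sketch of how an invisible prompt rewrite could work in principle. It is purely illustrative: the function names, the term list, and the keyword check are assumptions for demonstration, not Google’s actual pipeline.

```python
import random

# Hypothetical diversity qualifiers an image pipeline might append to prompts
# that appear to describe a person. Illustrative only; not Google's actual
# term list or selection logic.
DIVERSITY_TERMS = [
    "who is indigenous",
    "who is Black",
    "who is Asian",
    "who is South Asian",
]

# Simple keywords used to guess whether a prompt describes a person.
PERSON_WORDS = {"portrait", "chef", "doctor", "founding father", "person", "man", "woman"}


def augment_prompt(prompt: str) -> str:
    """Append a diversity qualifier when the prompt seems to describe a person.

    Mirrors the kind of behind-the-scenes rewrite Mitchell describes:
    "portrait of a chef" -> "portrait of a chef who is indigenous".
    """
    lowered = prompt.lower()
    if any(word in lowered for word in PERSON_WORDS):
        return f"{prompt} {random.choice(DIVERSITY_TERMS)}"
    return prompt


if __name__ == "__main__":
    print(augment_prompt("portrait of a chef"))
    # e.g. "portrait of a chef who is indigenous"
```

Because the rewrite happens before the prompt ever reaches the image model, users would see diversified results without any indication that their request had been modified, which is why such interventions are hard to detect from the outside.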

Efforts to address bias in AI tools are complicated by the training data itself, which is sourced primarily from the US and Europe and therefore reproduces the stereotypes and biases prevalent in internet content.

In conclusion, the debate surrounding AI bias and diversity underscores the complexities and challenges inherent in developing AI systems that accurately represent the diversity of the real world.
