Criticism of Google’s AI Tool for Displaying Images of People of Color

America’s Founding Fathers portrayed as Black women and ancient Greek warriors as Asian women and men: this was the alternate reality envisioned by Google’s generative AI tool, Gemini, in late February.

The introduction of the new image generation feature caused a stir on social media, sparking both curiosity and bewilderment. When users entered prompts to create AI-generated images of people, Gemini predominantly returned results featuring people of color, regardless of whether that suited the context of the prompt.

Some users found humor in their repeated, unsuccessful attempts to get Gemini to generate images of white people. While some results were deemed amusing online, others, such as images depicting people of color in World War II Nazi uniforms with swastikas, incited outrage, leading Google to temporarily deactivate the tool.

Here is an in-depth look into Google Gemini and the recent controversy surrounding it.

What is Google Gemini?

Google’s initial foray into the AI landscape was marked by a chatbot named Bard.

Bard was unveiled as a conversational AI program or “chatbot” capable of engaging in dialogue with users. Google CEO Sundar Pichai introduced Bard on February 6, 2023, and it was released for public use on March 21, 2023.

Renowned for its ability to generate essays or even code in response to written prompts, Bard was hailed as a “generative AI” tool.

Google announced the launch of Gemini as Bard’s successor, offering both a free and premium version accessible through its website and mobile application. Gemini was designed to process various forms of input and output, encompassing text, images, and videos.

While the image generation feature of Gemini garnered significant attention, it also sparked controversy.

What kinds of images did Gemini produce?

The most contentious images generated by Gemini depicted women and individuals of color in historical scenarios or roles traditionally occupied by white men. For instance, one rendering portrayed a pope appearing to be a Black woman.

Historians believe the Catholic Church may have had as many as three Black popes, the last of whom served until 496 AD, but there is no official record of a female pope in Vatican history. Medieval lore, however, tells of Pope Joan, a young woman who allegedly disguised herself as a man and reigned as pope in the ninth century.

How does Gemini operate?

Gemini is a generative AI system that combines the underlying models of Bard, such as LaMDA for conversational AI and Imagen for text-to-image generation, according to Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.

Generative AI tools leverage “training data” to respond to user queries and prompts.

According to a blog co-authored by Pichai and Demis Hassabis, CEO and co-founder of Google DeepMind, Gemini operates across various mediums simultaneously, encompassing text, images, audio, and more.

Mitchell explained that the tool processes a text prompt and generates the response that is statistically most likely, given the patterns in its training data.
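As a rough illustration of that idea, here is a toy sketch in Python. It is in no way Gemini’s actual code; it simply shows how a model trained on word-pair frequencies picks its next word in proportion to how often that continuation appeared in its training data.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the model learns which word tends to follow which.
corpus = "the pope wore white robes and the pope blessed the crowd".split()

# Count how often each word follows each preceding word (bigram frequencies).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prompt_word: str) -> str:
    """Sample the next word in proportion to its frequency in the training data."""
    counts = bigrams[prompt_word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # usually "pope": it followed "the" most often in training
```

Real systems like Gemini rely on models with billions of parameters rather than simple word counts, but the principle is the same: the output reflects the statistics of the data the model was trained on.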

Does generative AI exhibit bias issues?

Generative AI models have faced criticism for perceived biases in their algorithms, particularly in terms of overlooking individuals of color or perpetuating stereotypes during result generation.

Ayo Tometi, co-creator of the anti-racist movement Black Lives Matter, contends that AI, like all technology, risks amplifying existing societal prejudices.

Artist Stephanie Dinkins, who has explored AI’s capacity to authentically depict Black women, observed that AI often distorts facial features and hair textures when generating such images. Other artists experimenting with platforms like Stability AI, Midjourney, or DALL-E to create images of Black women have reported the same problems.

Critics argue that generative AI models tend to sexualize images of Black and Asian women. Some individuals from these communities have reported instances where AI lightened their skin tone when generating images.

These issues arise when the training data lacks diversity: the model replicates the skewed patterns it has seen and generates skewed images in turn.
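A quick way to see why: in the hypothetical sketch below, a training set in which 90 percent of “doctor” images are labelled as men yields a model that depicts men roughly 90 percent of the time. The numbers and labels are invented purely for illustration.

```python
import random
from collections import Counter

# Hypothetical skewed training set: 90 doctor images tagged "man", 10 tagged "woman".
training_labels = ["man"] * 90 + ["woman"] * 10
counts = Counter(training_labels)

# A model that samples in proportion to its training data reproduces the skew.
generated = random.choices(list(counts), weights=list(counts.values()), k=1000)
print(Counter(generated))  # roughly 900 "man" to 100 "woman": the bias is replicated
```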

Is this why Gemini produced inappropriate images?

Ironically, Gemini was designed to counteract precisely these biases.

While previous generative AI models tended to prioritize generating images of light-skinned men, Gemini notably produced images of people of color, particularly women, even when deemed inappropriate.

AI can be programmed to augment user prompts with additional terms, for example by randomly appending descriptors of underrepresented groups.

Furthermore, AI models can be directed to generate a larger set of images than is shown to the user, and to rank those images on criteria such as skin tone. Ranking darker skin tones more highly means users only ever see the images at the top of the list.
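Put together, the two techniques might look something like the sketch below. Everything here, including the descriptor list, the model.generate call, and the skin_tone_score attribute, is a hypothetical stand-in meant to show the shape of the approach, not Google’s actual system.

```python
import random

# Hypothetical descriptors a system might silently append to prompts (illustrative only).
DIVERSITY_TERMS = ["a Black woman", "a South Asian man", "an elderly Latina woman"]

def augment_prompt(prompt: str) -> str:
    """Technique 1: append a randomly chosen descriptor to the user's prompt."""
    return f"{prompt}, depicting {random.choice(DIVERSITY_TERMS)}"

def generate_and_rank(model, prompt: str, n_candidates: int = 16, n_shown: int = 4):
    """Technique 2: over-generate, rank the candidates, and show only the top few."""
    candidates = [model.generate(augment_prompt(prompt)) for _ in range(n_candidates)]
    # skin_tone_score stands in for whatever ranking criterion the system applies;
    # sorting in descending order means higher-scored images surface first.
    ranked = sorted(candidates, key=lambda img: img.skin_tone_score, reverse=True)
    return ranked[:n_shown]  # the user only ever sees these top-ranked images
```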

Google likely employed these strategies to mitigate historical biases, opting for a more idealistic approach in Gemini to avoid public backlash.

How did the public react to the Gemini images?

Gemini’s depictions drew a backlash from conservative circles online, which accused the tool of promoting a “woke agenda” by portraying America’s Founding Fathers as members of ethnic minority groups.

The term “woke,” originating from African American vernacular, has been repurposed by some American conservatives to push back against social justice movements. This sentiment has led to restrictions on race-related content in education, exemplified by Florida Governor Ron DeSantis blocking diversity programs in state colleges in February 2023.

Entrepreneur Elon Musk also criticized Gemini, reposting a screenshot of the chatbot discussing white privilege and branding the chatbot racist and sexist.

At the same time, Google faced criticism from minority ethnic groups for generating images such as Black people dressed in Nazi uniforms.

How did Google respond to the backlash?

Google acknowledged that the images produced by Gemini stemmed from efforts to eliminate biases perpetuating stereotypes and discriminatory attitudes.

Prabhakar Raghavan from Google elaborated that Gemini, calibrated to showcase diversity, had faltered in scenarios where such representation was inappropriate. He noted that the tool erred by being overly cautious and misinterpreting benign prompts as sensitive, resulting in erroneous and embarrassing images.

What other missteps did Gemini make?

Apart from generating controversial images, Gemini also faltered in producing accurate depictions of significant events like the Tiananmen Square massacre and the Hong Kong pro-democracy protests.

Users also noted that Gemini failed to translate into English Chinese phrases deemed sensitive by Beijing, including “Liberate Hong Kong, Revolution of Our Times” and “China is an authoritarian state.”

Has Google taken action regarding Gemini?

While Google has not suspended Gemini entirely, the company announced a temporary halt to the tool’s image generation function on February 22.

Google CEO Sundar Pichai addressed the failures in a memo to staff, later published by the news outlet Semafor, acknowledging the tool’s offensive responses and biases. He assured users that his teams were working to fix these issues, emphasizing the continuous improvement of the company’s AI technologies.

How has the controversy impacted Google?

The controversy took a visible toll on Google’s parent company, Alphabet: its shares fell in late February, wiping approximately $96.9 billion off the company’s market value.
