
Google Halts AI-Generated People Images Following Ethnicity Concerns

Company says it will adjust its Gemini model after criticism of ethnically diverse Vikings and German World War II soldiers.

Google has temporarily blocked its latest artificial intelligence model from generating images of people after it depicted German World War II soldiers and Vikings with diverse ethnicities. The company announced that it would halt the image generation feature of its Gemini model after users shared examples of historically significant figures, such as popes and US founding fathers, portrayed in a variety of ethnicities and genders.

In response to the concerns raised, Google said it is actively working to resolve the issues with Gemini's image generation capability. While it does so, it will suspend the generation of images featuring people and plans to release an improved version soon.

Although Google did not specify particular images in its statement, examples of Gemini's outputs were widely circulated on X, sparking discussion of the accuracy and bias challenges facing AI technology. A former Google employee remarked on the difficulty of achieving diverse representation, noting how hard it was to get the model to acknowledge that white people exist.

Jack Krawczyk, a senior director on Google's Gemini team, acknowledged that the model's image generation function needed adjustment and stressed the importance of immediate improvements to depict people worldwide more accurately. While Gemini is designed to generate a wide spectrum of people, Krawczyk admitted that refinements were needed to align its outputs with historical contexts.

Google reiterated its commitment to reflecting its diverse user base through its AI tools, with a focus on addressing bias and improving inclusivity. The company pledged to refine responses to historical queries and nuanced requests, recognizing the complexity such scenarios involve.

Instances of bias in AI technology, particularly affecting people of color, have been documented in various studies. Reports have highlighted disparities in image generation that disadvantage certain demographics. Mitigating bias in deep learning and generative AI remains challenging, with ongoing research exploring solutions such as curated datasets and model safeguards.

Andrew Rogoyski from the Institute for People-Centred AI at the University of Surrey emphasized the complexity of addressing bias in AI systems. While errors may persist, advancements in AI and large language models are expected to drive improvements in bias mitigation over time.
