
### Limitations of AI Exposed: Google’s ‘Woke’ Image Generator

Google has hit pause on Gemini’s ability to generate images of people after a far-right backlash.

Google has acknowledged that its Gemini AI model fell short of expectations following widespread criticism of what many perceived as “anti-white bias.” Users noted that the system generated images of people from diverse ethnic backgrounds and genders even in contexts where doing so was historically inaccurate. In response, Google announced a temporary halt to generating images of people until a fix could be rolled out.

For instance, when asked to create images of Vikings, Gemini exclusively displayed Black individuals dressed in traditional Viking attire. Similarly, requests for “founding fathers” yielded images of Indigenous people in colonial clothing, with one result depicting George Washington as Black. Moreover, when prompted for an image of a pope, the system showed only individuals of non-white ethnicities. In some instances, Gemini was unable to produce images of historical figures such as Abraham Lincoln, Julius Caesar, and Galileo at all.

Critics, particularly on the political right, seized on the issue as evidence of a broader anti-white bias within Big Tech. Entrepreneur Mike Solana went so far as to label Google’s AI an “anti-white lunatic.” Experts such as Gary Marcus, however, suggested that the problem lay more in the inadequacies of the software than in any intentional bias.

Google introduced its Gemini AI model two months ago as a competitor to OpenAI’s GPT models. A recent update, Gemini 1.5 Pro, aimed to enhance the system’s ability to handle large volumes of audio, text, and video input. Despite these advances, Gemini still produced historically inaccurate images, such as one portraying the Apollo 11 crew as including a woman and a Black man.

Acknowledging the shortcomings, Google said it was committed to addressing the issues promptly. Jack Krawczyk, a senior director for Gemini Experiences at Google, emphasized the company’s dedication to inclusive representation and bias mitigation. While defending the system’s ability to generate a diverse range of images, Krawczyk conceded that it needed fine-tuning, especially for historical contexts.

The incident sparked debate online, with some critics accusing Google of racism or of bowing to “woke ideology.” Experts, however, caution against attributing human-like judgment to AI systems like Gemini, noting that such models struggle to distinguish historical requests from contemporary ones.

As the AI industry grapples with bias mitigation, experts stress how difficult it is to achieve balanced representation while respecting historical context. Sasha Luccioni of the AI startup Hugging Face notes that bias in AI models exists on a spectrum and that there is no one-size-fits-all solution to bias and diversity, pointing to the ongoing efforts by companies like Google to navigate these trade-offs.
