Image caption: Google’s headquarters in Mountain View, California, owned by Alphabet Inc. (Getty Images)
Google is working urgently to fix problems with its recently launched AI image-generation tool, following complaints that it was over-correcting in an effort to avoid potential bias.
Users reported that Google’s Gemini bot produced images depicting a range of genders and ethnicities even when doing so was historically inaccurate.
For instance, when prompted for images of America’s founding fathers, the results included women and individuals of color.
Acknowledging the shortcomings, the company said the tool was not meeting expectations.
Jack Krawczyk, the senior director for Gemini Experiences, commented, “Gemini’s AI image generation produces a diverse range of people, which is generally positive given its global user base. However, it falls short in certain contexts.”
“We are actively working to enhance the accuracy of these depictions,” he emphasized.
This is not the first time AI technology has stumbled over real-world questions of diversity.
Google, for example, apologised almost a decade ago after its photo app labelled an image of a Black couple as “gorillas.”
OpenAI similarly faced criticism for potentially reinforcing stereotypes when its Dall-E image generator responded to queries such as “chief executive” predominantly with images of white men.
With the pressure to demonstrate advancements in AI, Google recently unveiled the latest iteration of Gemini, a tool that generates images based on text prompts.
However, the release faced swift backlash, with critics accusing the company of overly emphasizing social awareness.
Computer scientist Debarghya Das expressed frustration, stating, “It’s surprisingly challenging to prompt Google Gemini to recognize the existence of white individuals.”
Author and humorist Frank J Fleming echoed similar sentiments, highlighting the difficulty in obtaining an image of a Viking through the tool.
These criticisms gained traction, particularly within right-wing circles in the United States, where major tech platforms are already confronting allegations of left-leaning bias.
Mr. Krawczyk emphasized Google’s commitment to addressing representation and biases, aiming to align the tool’s results with its diverse user base.
“We recognize the complexity of historical contexts and will fine-tune our approach to accommodate these nuances,” he shared on X, formerly known as Twitter, where users were sharing examples of the tool’s questionable outcomes.
“This process reflects our dedication to continuous improvement based on feedback. Your input is valuable, so please continue to share your thoughts!”