Joshua Zitser
Google has announced plans to address concerns about Gemini, its answer to OpenAI’s GPT-4, after critics called the multimodal AI model’s image-generation feature “woke.”
The company said Thursday it would temporarily pause Gemini’s generation of AI images of people while it makes adjustments, Business Insider reported.
Criticism spread on social media after Gemini generated images of people of color in historically inaccurate settings. BBC News was among the first outlets to report on the issue.
For instance, software engineer Mike Wacker shared an example on X in which a request for images of the Founding Fathers produced a woman of color, followed by a man of color wearing a turban.
Others on X criticized Gemini’s perceived “woke” behavior, pointing to prompts whose outputs repeatedly included the word “diverse.”
Debarghya Das, a computer scientist and former Google employee, wrote on X that it was difficult to get Gemini to acknowledge that white people exist.
In response, a Google spokesperson told Business Insider by email that the company is working to improve the accuracy of Gemini’s image depictions, particularly in historical contexts. Google said that while it is important for the model to represent a diverse range of people given its global user base, it is missing the mark here and needs to make immediate improvements.
In a statement on X, Google acknowledged the inaccuracies in Gemini’s historical image generation.
In a follow-up email to BI, Google confirmed it is temporarily suspending Gemini’s ability to generate images of people while it works on improvements, and said it will release an updated version soon.