
### Refusal of Meta’s AI Chatbot to Envision Interracial Relationships

Meta’s AI is the latest chatbot to be accused of racist bias, as users find the image generator unable to depict an Asian man with a white woman.

By William Hunter

Shortly after Google faced backlash over its ‘woke’ Gemini AI, another tech behemoth is under fire for its bot’s racial bias.

Meta’s AI image generator is now under scrutiny for its alleged ‘racist’ behavior, as users noticed its inability to depict an Asian man with a white woman.

This tool, developed by Meta, the parent company of Facebook, can transform nearly any written prompt into a remarkably lifelike image almost instantly.

Despite Meta CEO Mark Zuckerberg being married to an Asian woman, the AI failed to generate images featuring mixed-race couples.

Critics on social media have condemned this flaw as a manifestation of racial bias within the AI, labeling it as ‘racist software designed by biased engineers’.

Mia Sato, a journalist at The Verge, conducted tests using prompts like ‘Asian man and Caucasian friend’ or ‘Asian man and white wife’, with surprising outcomes.

Out of numerous attempts, Meta’s AI rendered an Asian man with a white woman only once, in every other case returning images of two East Asian people.

Even when prompted for platonic relationships such as ‘Asian man with Caucasian friend’, the AI still could not deliver accurate results.

Ms. Sato expressed her dismay, stating that the AI’s inability to conceptualize Asian individuals alongside white individuals is concerning, restricting imaginative possibilities to conform to societal stereotypes.

Meta’s AI image generator: What is it?

Imagine with Meta AI (imagine.meta.com) is an AI system that generates customized images based on textual descriptions provided by users.

Upon submission of an image description, users receive a set of four generated images.

Launched in December 2023, this AI is driven by Meta’s Emu image model and is currently accessible only in the US.

While Ms. Sato refrains from directly accusing Meta of fostering a racist AI, she acknowledges the presence of bias and reinforcement of stereotypes within the system.

Conversely, social media users have been more vocal in their criticism, branding Meta’s AI tool as explicitly racist.

One user on X (formerly Twitter) highlighted the inherent racism in AI technology, emphasizing the significant flaws within these systems.

The apparent bias in Meta’s AI is particularly surprising given Mark Zuckerberg’s personal background, as the CEO himself is married to an East Asian woman, Priscilla Chan.

Despite the humorous attempts by some users to create images of Chan and Zuckerberg using Meta’s AI, the underlying issue of racial bias remains a serious concern.

Meta joins the ranks of major tech companies facing backlash for their ‘racist’ AI image generators, following Google’s Gemini AI tool controversy earlier this year.

Google had to pause Gemini due to criticism of its biased image generation, including depictions of Asian Nazis, Black Vikings, and female medieval knights in response to race-neutral requests.

Acknowledging the need for improvement, Google stated that while Gemini generates a diverse range of images, there were clear shortcomings in certain areas.

Ms. Sato also noted that Meta’s AI image generator tended to perpetuate stereotypes, particularly in its portrayal of South Asian individuals.

Instances were reported where the system added cultural elements such as bindis and saris without being prompted to, and consistently depicted Asian men as elderly and Asian women as young.

Even in cases of generating mixed-race couples, the AI often portrayed an older man with a young, light-skinned Asian woman, further highlighting the system’s biases.

Despite some users managing to create images of mixed-race couples using Meta’s AI, the overall performance of the system in accurately representing diverse relationships remains questionable.

Generative AIs like Gemini and Meta’s image generator rely on extensive datasets reflective of societal norms and biases.

The lack of representation of mixed-race couples in the training data could contribute to the AI’s struggles in generating such images, hinting at inherent societal prejudices.
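This mechanism can be illustrated with a toy sketch. The numbers and labels below are entirely hypothetical, not Meta’s actual data or model: a generator that samples outputs in proportion to how often each combination appears in its training data will almost never produce a combination that is rare in that data, regardless of what the user asks for.

```python
import random

# Hypothetical training-data counts for a toy "generator" -- a stand-in for
# the idea that generative models reproduce the frequencies they were trained on.
TRAINING_COUNTS = {
    "asian_man_asian_woman": 900,
    "white_man_white_woman": 900,
    "asian_man_white_woman": 5,  # heavily underrepresented in this toy dataset
}

def sample_image(rng: random.Random) -> str:
    """Pick a couple type, weighted by its frequency in the training data."""
    labels = list(TRAINING_COUNTS)
    weights = list(TRAINING_COUNTS.values())
    return rng.choices(labels, weights=weights, k=1)[0]

def rate_of(label: str, n: int = 10_000, seed: int = 0) -> float:
    """Estimate how often the toy generator produces a given couple type."""
    rng = random.Random(seed)
    hits = sum(sample_image(rng) == label for _ in range(n))
    return hits / n

rate = rate_of("asian_man_white_woman")
print(f"mixed-pair rate: {rate:.4f}")  # far below 1%: the rare combination is drowned out
```

Real image models are vastly more complex, but the sketch captures the core argument: if mixed-race couples make up a tiny fraction of the training images, the model's learned distribution will rarely produce them unless the system is explicitly corrected for the imbalance.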

Researchers posit that AI systems may inadvertently learn discriminatory behaviors from biased training data, as seen in the case of Google’s Gemini.

While Meta has yet to address these concerns, the issue of racial bias in AI technology persists, prompting calls for greater transparency and accountability in AI development processes.

Last modified: April 4, 2024