
### University of Washington Researchers Find AI Image Generator Produces Stereotypical Images


An example of the images produced by the Stable Diffusion text-to-image generator when University of Washington researchers prompted it to depict a person from Europe (on the left) and a person from the USA. The results, typically male and light-skinned, showcase the potential biases in AI-generated imagery.

When tasked with envisioning a person from North America or a woman from Venezuela, AI-powered imaging programs often generate stereotypical responses, portraying a default “person” as male and light-skinned. Women from Latin American countries are more likely to be objectified than their European and Asian counterparts, while individuals identifying as nonbinary or Indigenous are severely underrepresented.

Recent research from the University of Washington highlights these disparities, with findings set to be presented at the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore. Using the Stable Diffusion AI image generator, the researchers requested images of “front-facing people” from various continents and countries, and found a tendency to depict light-skinned individuals even in regions with diverse populations, such as Oceania.
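The article does not specify which Stable Diffusion checkpoint or exact prompt wording the researchers used, but the generation setup they describe can be sketched with the open-source `diffusers` library. In the minimal sketch below, the model id, prompt template, and region list are all illustrative assumptions based on the article's description, not the study's actual configuration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint. The version the UW team used is
# not named in the article; v1.5 is an illustrative assumption.
# A CUDA GPU is assumed here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Prompts modeled on the study's description of "front-facing people"
# from different continents and countries; the exact wording and the
# set of regions are assumptions.
regions = ["Oceania", "Europe", "the USA", "Venezuela", "Japan"]
for region in regions:
    prompt = f"a front-facing photo of a person from {region}"
    image = pipe(prompt, num_inference_steps=50).images[0]
    image.save(f"person_{region.replace(' ', '_').lower()}.png")
```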

Moreover, the study inadvertently uncovered issues of sexualization across different nationalities when the AI model flagged its own images as “not safe for work.” When the researchers ran the generated images through an NSFW detector to assess the level of sexualization, images of women from Venezuela received significantly higher “sexy” scores than images of women from the USA or Japan.
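The article does not name the NSFW detector the researchers used, and the “sexy” score it quotes suggests a multi-class detector. As a rough sketch of this scoring step only, one could run the generated images through an off-the-shelf image classifier from the Hugging Face Hub; the model id below is an assumption, and its label set differs from the one quoted above.

```python
from transformers import pipeline

# An off-the-shelf NSFW image classifier. This particular model is an
# illustrative stand-in for whatever detector the study used, and its
# labels ("normal" / "nsfw") differ from the "sexy" class the article
# mentions.
classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

# Score the images produced in the previous sketch and compare the
# results across nationalities, as the study did.
for path in ["person_venezuela.png", "person_the_usa.png", "person_japan.png"]:
    scores = {pred["label"]: pred["score"] for pred in classifier(path)}
    print(path, scores)
```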

The AI model’s training on publicly available image datasets paired with internet-sourced captions has led to biased representations, reinforcing stereotypes related to gender, nationality, and even inanimate objects. These biases extend beyond gender, impacting depictions of race, religion, and socioeconomic status, raising concerns about the rapid proliferation of problematic AI-generated content.

As AI technologies like OpenAI’s DALL-E exhibit similar biases, the need for proactive measures to address these issues becomes increasingly urgent. Despite the potential benefits of AI, technological advancement continues to outpace regulatory efforts, necessitating a deeper understanding of the societal factors contributing to biased outputs.

The research, supported by a National Institute of Standards and Technology award, underscores the challenges faced by governments, regulators, and institutions in navigating the complex landscape of AI ethics and governance. Initiatives like the City of Seattle’s AI policy and Microsoft’s advocacy for AI regulations signal growing awareness of the need to guide the responsible development and deployment of generative AI tools in alignment with ethical standards.
