
### Unveiling New Findings: AI’s Evident Racial Bias in Job Recruitment

AI has yet to solve hiring discrimination, and it might be making it worse.

In a now-familiar pattern, generative AI is mirroring the biases of its creators.

A recent Bloomberg investigation found that OpenAI's generative AI technology, specifically GPT-3.5, showed racial bias when answering recruitment-related queries. This suggests that recruitment and human resources professionals who are increasingly integrating generative AI tools into automated hiring workflows (such as LinkedIn's new Gen AI assistant) may inadvertently perpetuate discriminatory practices, echoing earlier instances of bias in AI technologies.

The investigation ran a simple but revealing experiment: feeding fictitious names and resumes into AI recruitment systems to see how quickly racial bias emerges. Such audit studies have long been used to detect bias, human or algorithmic, in professional recruitment.

The study used voter and census data to select names statistically linked to a particular racial or ethnic group at least 90% of the time. These names were randomly assigned to equally qualified resumes to assess the AI's ranking behavior. GPT-3.5, the most widely used version of the model, favored certain demographics over others, exceeding thresholds used to flag discriminatory hiring practices against protected groups.
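The audit design described above can be sketched in a few lines of Python. This is a minimal illustration, not Bloomberg's actual harness: the name pools, the `rank_resumes` stand-in, and the resume text are all hypothetical placeholders, and the 80% cutoff reflects the "four-fifths rule" commonly used as a rough indicator of adverse impact in hiring.

```python
import random
from collections import defaultdict

# Hypothetical name pools. A real audit draws names that voter/census
# data links to one demographic group at least ~90% of the time.
NAME_POOLS = {
    "group_a": ["Name A1", "Name A2"],
    "group_b": ["Name B1", "Name B2"],
}

def rank_resumes(resumes):
    """Stand-in for the model under test (e.g. an LLM asked to pick
    the top candidate). Here it simply picks an index at random."""
    return random.randrange(len(resumes))

def audit(trials=10_000, base_resume="<identical qualifications>"):
    """Repeatedly submit one equally qualified resume per group and
    tally how often each group's candidate is ranked first."""
    top_counts = defaultdict(int)
    groups = list(NAME_POOLS)
    for _ in range(trials):
        # Identical resumes, only the randomly chosen name differs.
        batch = [(g, f"{random.choice(NAME_POOLS[g])}\n{base_resume}")
                 for g in groups]
        random.shuffle(batch)
        winner_group = batch[rank_resumes([r for _, r in batch])][0]
        top_counts[winner_group] += 1
    rates = {g: top_counts[g] / trials for g in groups}
    # Four-fifths rule: a group selected at under 80% of the
    # best-performing group's rate is flagged for adverse impact.
    best = max(rates.values())
    flags = {g: rate / best < 0.8 for g, rate in rates.items()}
    return rates, flags
```

Swapping the random `rank_resumes` for real model calls turns this into the kind of audit the investigation describes: because the resumes are identical, any persistent gap in selection rates can only come from the names.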

The experiment grouped names into White, Hispanic, Black, and Asian categories, as well as male and female categories, and submitted them for a range of job openings. Notably, ChatGPT consistently associated female-coded names with roles historically dominated by women, such as HR positions, and selected Black women candidates 36% less often for technical roles like software engineering.

Furthermore, ChatGPT exhibited unequal distribution of equally ranked resumes across job categories, skewing rankings based on gender and race. OpenAI, in response to Bloomberg, clarified that this behavior does not align with how most clients deploy their software, highlighting efforts by many businesses to adjust responses and mitigate biases. The investigation also consulted 33 AI experts, recruiters, computer scientists, legal professionals, and other specialists to provide additional insights into the findings.

While not groundbreaking in the realm of advocacy and research cautioning against the ethical implications of AI dependence, the report serves as a stark reminder of the risks associated with widespread adoption of generative AI without adequate scrutiny. With a limited number of major players dominating the market and shaping the software and data underpinning our intelligent systems and algorithms, the avenues for diversity become increasingly constrained. As highlighted in Mashable’s analysis by Cecily Mauran on the AI industry’s homogeneity, insular AI development practices lead to diminished quality, reliability, and crucially, diversity.

Furthermore, entities like AI Now contend that even the presence of “humans in the loop” may not suffice to address these challenges.

Last modified: March 9, 2024