ChatGPT has been accused of favoring names distinctive to certain racial groups when ranking candidates for different jobs, according to a Bloomberg News investigation.
OpenAI offers businesses the opportunity to utilize its AI-powered chatbot technology for HR and recruiting purposes.
ChatGPT has been trained on a wide range of data sources, including books, articles, and social media posts, which means it can reflect the biases present in that data.
Bloomberg chose real names from census data that signal a specific race or ethnicity at least 90 percent of the time and assigned them to equally qualified resumes. The widely used chatbot, running on GPT-3.5, then ranked those resumes, and it displayed bias toward particular races depending on the position it was evaluating them for.
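The audit design described above can be sketched in a few lines: identical resume text gets names associated with different demographic groups, a ranking function (a stand-in here for the chatbot) picks a top candidate, and the share of wins per group is tallied. The names and resume text below are hypothetical placeholders, not the census-derived names Bloomberg actually used.

```python
import random
from collections import Counter

# Hypothetical name pools standing in for the census-derived names
# Bloomberg used (each signaling a group at least 90% of the time).
NAME_POOLS = {
    "white_men": ["Placeholder Name 1"],
    "black_men": ["Placeholder Name 2"],
    "asian_women": ["Placeholder Name 3"],
    "hispanic_women": ["Placeholder Name 4"],
}

# One identical resume body shared by every candidate in a trial.
BASE_RESUME = "Financial analyst, 5 years experience, CFA, BS in Finance."

def run_trial(rank_top):
    """One trial: same resume text, different names, shuffled order;
    returns the demographic group of the top-ranked resume."""
    entries = [
        (group, f"{random.choice(names)}\n{BASE_RESUME}")
        for group, names in NAME_POOLS.items()
    ]
    random.shuffle(entries)  # avoid position effects
    winner = rank_top([resume for _, resume in entries])
    return entries[winner][0]

def top_pick_rates(rank_top, trials=1000):
    """Share of trials in which each group's resume was ranked first."""
    counts = Counter(run_trial(rank_top) for _ in range(trials))
    return {group: counts[group] / trials for group in NAME_POOLS}

# An unbiased ranker should pick each of the four groups ~25% of the
# time; consistent deviation from that indicates name-based bias.
unbiased = lambda resumes: random.randrange(len(resumes))
```

In the real experiment, `rank_top` would call the chatbot with the shuffled resumes and parse out its first choice; the unbiased random baseline shows what an even distribution of top picks looks like.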
According to Bloomberg, the experiments revealed that utilizing generative AI for recruiting and hiring could lead to widespread automated discrimination.
When asked to rank eight equally qualified resumes for a real financial analyst role at a Fortune 500 company, ChatGPT consistently showed a bias against resumes with names distinctive to black Americans.
Resumes with names distinctive to Asian women were ranked as the top candidate for the financial analyst role more than twice as often as those with names distinctive to black men.
The experiment used four different kinds of job openings: financial analyst, retail manager, senior software engineer, and HR business partner.
According to the analysis, ChatGPT’s gender and racial preferences varied with the job being evaluated.
According to Bloomberg, black Americans were less likely to be selected as top candidates for financial analyst and software engineer positions.
The bot seldom selected names typically linked with men as the top choice for roles traditionally held by women, such as retail and HR positions.
Resumes with names distinctive to Hispanic women were nearly twice as likely to be chosen as the best applicant for an HR role as those with names distinctive to men.
OpenAI informed Bloomberg that the results generated by using GPT models ‘out-of-the-box’ may not align with the results produced by customers who can customize the software’s responses to meet their specific hiring requirements.
According to OpenAI, businesses could decide to remove names before entering resumes into a GPT model.
OpenAI frequently performs adversarial testing and red-teaming on its models to investigate how malicious actors could potentially misuse them, according to the company.
SeekOut, an HR tech company, has created its own AI recruiting tool that processes job descriptions using GPT and produces a list of candidates from platforms like LinkedIn and GitHub.
Hundreds of businesses, including Fortune 10 and tech organizations, are already utilizing the technology, according to SeekOut general counsel Sam Shaddox, who spoke with Bloomberg.
Saying, “Hey, there’s all this bias out there, but we’re just going to ignore it” is not the appropriate response, in Shaddox’s opinion.
“Large language model technology like GPT is the best solution for it because it can identify some of those biases and then work toward overcoming them.”
Emily Bender, a University of Washington computational linguistics professor, is less convinced.
According to Bender, a phenomenon known as automation bias leads people to assume machines are objective in their decision-making, especially compared with humans.
Nevertheless, it’s easy to picture businesses that use these systems claiming, “Well, we didn’t have any bias here; we just followed the computer’s instructions,” even if the systems result in a pattern of discriminatory hiring decisions.