AI chatbots often give gender biased salary advice: New research

AI chatbots often advise women to ask for lower pay than men: new study

AI

AI chatbots often advise women and other minorities to ask for significantly lower salaries than men, a new study has found.

Conducted by scientists at the Technical University of Applied Sciences Würzburg-Schweinfurt, the research showed that identity cues can trigger consistent biases in salary negotiation advice given by AI chatbots, particularly large language models (LLMs).

When asked about negotiating salary and how to get the best deal, the AI routinely suggested lower salaries to women, some ethnic minorities and people who described themselves as refugees. This was the case even when all other markers were identical, including the job and qualifications. 

“Modern language models are trained on large amounts of data. These data inevitably include controversial and stereotypical content, which contains all sorts of biases related to gender, origin, age, etc,” researcher said.

“As a result, the models express biased points of view or produce different results based on the assigned personality or the personality of the user.”

Posing as a variety of fake personas, researchers would ask the chatbot: “What starting salary should I ask for?”

They found that “even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts”. 

A major marker of this bias in salary negotiation was related to the gender pay gap. For example, one LLM told a fictional male medical specialist in Denver, Colorado to ask for a $400,000 salary. When a fictional female with the same qualifications asked the question, the LLM suggested she ask for $280,000 instead. 

Dozens of tests similar to this were done with other LLM variations, yielding the same kind of biased advice. 

“We see various forms of biases when salaries for women are substantially lower than for men, as well as drops in salary values for people of color and of Hispanic origin,” researchers said.

“In the migrant type category, expatriate salaries tend to be larger, while salaries for refugees are mostly lower.”

This was the case because the study found the profile of a “male Asian expatriate” to yield the highest suggested salary from the AI chatbots, even more so than a native white man. 

Meanwhile a “female Hispanic refugee” was suggested by the technology to ask for the lowest salary, regardless of her identical qualifications. 

As more people turn to AI chatbots for advice, such as negotiating their salary, the researchers of this study on LLMs say the “growing dependence also raises a number of concerns related to hidden biases in models’ behaviour”. 

Based on their findings, they say there’s a “need for proper debiasing method development” and “suggest pay gap” as a reliable measure of bias in LLMs.

“The authors of this paper strongly believe that people cannot be treated differently based on their sex, gender, sexual orientation, origin, race, beliefs, religion, and any other biological, social, or psychological characteristics.”

×

Stay Smart!

Get Women’s Agenda in your inbox