Just when we thought society was moving in the right direction with diversity, equity and inclusion (DEI), the use of artificial intelligence (AI) in some forums is creating new challenges.
Does the media reflect who you are, or does it shape who you become? My journalism professor once told me the two are locked in a constant and equal battle, but that was before AI came along and hijacked the debate.
We now have to consider who we are and who AI says we are, and that’s a big problem.
In boardrooms and editorial offices worldwide, executives meet daily to determine how best to include diverse groups in the visual representation of their operations, from corporate compliance presentations to neon-lit billboards.
Sure, there’s a commercial element to keeping up with societal norms. Recent performance data show that incorporating DEI into media and advertising pays off, which has encouraged executives to advance the diverse visual representations around us.
For example, a study conducted by Amazon Ads last year found that 73% of consumers globally believe it is important that the brands they buy from take action to promote DEI, an increase of 7% on the previous year.
“We are building diversity into the fabric of who we are as a brand and what we represent to our customers. We remain steadfast on our DEI journey and have continued to scale our work through technology,” said Candi Castleberry, vice president of inclusive experience and technology at Amazon.
According to Amazon, the top five DEI areas of most importance to global consumers are: gender equality (29%), racial equity (27%), income (20%), education (20%), and age (20%).
The ignorant infancy of AI images
As someone who has worked in finance my whole career, I know all too well the unique challenges that women face. While there has been a lot of progress in recent years, AI has arrived with its very “un-DEI” interpretation of what it thinks finance should look like.
The reason? Despite sophisticated algorithms and machine learning capabilities, many AI systems are still trained on biased datasets that fail to represent the full spectrum of human diversity.
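To see how that happens, here is a minimal sketch in Python, using entirely hypothetical image metadata: when the examples a model learns from are skewed, its output inherits the same skew.

```python
from collections import Counter

# Hypothetical metadata for a handful of training images tagged "finance".
# Real training sets are scraped at vastly larger scale, but the imbalance
# looks the same.
training_images = [
    {"tags": ["finance", "suit"], "subject_gender": "male"},
    {"tags": ["finance", "money"], "subject_gender": "male"},
    {"tags": ["finance", "boardroom"], "subject_gender": "male"},
    {"tags": ["finance", "desk"], "subject_gender": "female"},
    {"tags": ["finance", "money"], "subject_gender": "male"},
]

# Count how often each gender appears among "finance"-tagged images.
counts = Counter(
    img["subject_gender"]
    for img in training_images
    if "finance" in img["tags"]
)

total = sum(counts.values())
for gender, n in counts.items():
    print(f"{gender}: {n / total:.0%} of 'finance' training images")
# A model trained on this data will reproduce the 80/20 skew in what it generates.
```

The audit is trivial; the hard part is that most commercial training sets are never audited, or even disclosed, before the model ships.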
Attractive, slim, white, middle-aged men striding in suits and fanning dollar bills increasingly appear in AI-generated news imagery. Contemplative older men dreaming of piles of dollar bills, suggestively juxtaposed with a mining pick, are another common AI combination.
Devoid of women or any other diverse group, AI’s narrow representation is deeply problematic, perpetuating outdated stereotypes. If any other company or verified media outlet produced or published these images, it would be pilloried. But under the hidden veil of AI, it slips past the keeper.
Of course, these issues are not limited to finance. Ongoing United Nations research has found that if you ask a common AI image generator to create a visual representation of an “engineer”, “scientist”, “mathematician”, or “IT expert”, 75% to 100% of the results will show men.
In this case, AI has failed to catch up with reality, as women currently make up between 28% and 40% of graduates in STEM fields globally, according to the UN.
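Auditing that kind of skew is simple in principle. The UN research does not publish its method in code, so the Python sketch below is only an illustration of the method’s shape; generate_image and classify_presented_gender are hypothetical stand-ins for whatever generator and rater are actually used. Generate a batch of images per occupational prompt, label the perceived gender of each, and tally the results.

```python
import random
from collections import Counter

# Occupations tested in the UN research cited above.
PROMPTS = ["an engineer", "a scientist", "a mathematician", "an IT expert"]
SAMPLES_PER_PROMPT = 100

def generate_image(prompt: str) -> bytes:
    """Placeholder for a call to any text-to-image model (hypothetical)."""
    return b""  # no real model is invoked in this sketch

def classify_presented_gender(image: bytes) -> str:
    """Placeholder for a human rater or vision classifier (hypothetical).
    Simulated with an 80/20 skew so the audit loop runs end to end."""
    return random.choices(["man", "woman"], weights=[80, 20])[0]

for prompt in PROMPTS:
    tallies = Counter(
        classify_presented_gender(generate_image(prompt))
        for _ in range(SAMPLES_PER_PROMPT)
    )
    pct_men = 100 * tallies["man"] / SAMPLES_PER_PROMPT
    print(f"{prompt!r}: {pct_men:.0f}% of {SAMPLES_PER_PROMPT} images depict men")
```

Any newsroom or brand using generative images could run this kind of spot check before publishing; few do.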
The battle between AI past and DEI future
AI in its current state often relies on biased datasets that reflect historical prejudices rather than contemporary realities, unwinding decades of diversity effort. The result is a homogeneous, unrealistic portrayal of success and professionalism.
And while we may not consciously pick up on this biased ‘reflection’, it is absolutely affecting public perception and perpetuating systemic inequalities. When a narrow demographic dominates representation, it can marginalise and alienate those who do not fit these limited categories, stifling diversity and entrenching inequity.
The biggest problem is that many of us can’t spot the bias in action until it’s called out. And just as we “can’t be what we can’t see”, it’s hard to see beyond what we are fed visually, from fashion to finance. This is why accountability matters, why photographers are credited, and why companies invest heavily in authenticity.
Utilising AI images, even transparently, has the opposite effect on authenticity: it breeds distrust. Last year, global fashion giant Levi’s announced that it would begin testing AI-generated clothing models in a bid to diversify its online shopping experience. While Levi’s said the AI models would be more “body-inclusive”, allowing customers to view what an item looks like on a wide range of body types, ages, sizes, and skin tones, the backlash was swift.
The irony of claiming to be “diverse” while robbing real people, and real bodies, of work sent ripples through the global industry. Modelling agent Chelsea Bonner has spoken out about the cultural, social and economic damage of shifting jobs offshore to AI representations of who AI deems our culture to be. Bonner, along with Robyn Lawley and Tracey Spicer, has started a petition against the misuse of AI images and videos, collecting more than 20,000 signatures.
The technology industry is beginning to recognise and address these biases, with some companies actively working to develop more inclusive algorithms and diverse datasets. Nevertheless, man-heavy AI images continue to be generated and circulated alongside finance stories.
Regrettably, it is humans who taught AI to be sexist and racist through the data we have fed it, but herein lies the opportunity for change. AI is built on the past, while DEI looks to the future. Hopefully, the two will meet at some point.