Executives now realise that varied perspectives, experiences and personalities produce improved customer, business and community outcomes. A recent study shows that employees working in diverse and inclusive teams are 10 times more likely to be highly effective than those in non-inclusive teams, nine times more likely to innovate, and five times more likely to provide excellent customer service.
However, diversity and inclusion are not yet mainstream. There is still a gap. The same study shows that less than half of Australians currently work in diverse teams. Many organisations still struggle with “comfortable clone syndrome”, where recruiters and hiring managers tend to select candidates who resemble themselves. This unconscious bias undermines our ability to drive diversity and inclusion in our workplaces.
Where artificial intelligence (AI) can help and how it can fail
Over the years, AI has been adopted across many industries and functions, including HR. The use of AI algorithms to automate the selection of candidate profiles can drive efficiencies and remove the unconscious human bias within the hiring process. This approach offers the potential of an improved candidate experience, business outcomes, and equal opportunity.
AI algorithms for recruitment are trained to make candidate selection decisions based on past examples of candidate profiles, roles and hiring decisions. If those past examples embed human bias, however, the algorithms will learn and reproduce it, generating discriminatory decisions.
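To make this concrete, here is a minimal, purely illustrative Python sketch. The data and the "model" are hypothetical and deliberately crude: the model does nothing more than memorise historical hire rates, which is enough to show how skewed past decisions get reproduced.

```python
# Hypothetical historical examples: (gender, hired).
# Men in this data were hired at a much higher rate than women.
history = [("M", True)] * 70 + [("M", False)] * 30 \
        + [("F", True)] * 20 + [("F", False)] * 80

def train(examples):
    """'Train' by memorising the historical hire rate per group."""
    rates = {}
    for group in {g for g, _ in examples}:
        outcomes = [hired for g, hired in examples if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    # The model simply favours groups that were favoured in the past.
    return rates[group] >= 0.5

model = train(history)
print(predict(model, "M"))  # True  -- men recommended for hire
print(predict(model, "F"))  # False -- women screened out
```

No one wrote a rule saying "prefer men"; the discrimination emerges entirely from the training examples, which is exactly the failure mode described above.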
Last year, one effort to automate recruitment produced a tool that was soon found to discriminate against women. The tool had been trained on resumes submitted to the company over a 10-year period, most of which came from men.
A careers platform faced a similar issue: an algorithmic bias caused higher-paid, more senior jobs to be shown to women less frequently. This occurred because the AI tool generated recommendations from past user behaviour on LinkedIn, where users were predominantly male.
It is paramount that businesses understand this reality and make sure they create the right environment to limit the risk of bias, both in hiring, and in building AI recruitment tools.
Removing bias throughout the hiring process
These biases creep in because human decisions shaped the data used to train the AI recruitment algorithm. To address this issue, developers of AI recruitment algorithms need to detect the bias and remove it.
This can be accomplished by examining, identifying and removing the bias embedded in the past hiring examples used to train the algorithm. The resulting models then need to be rigorously tested to verify that they are fair and support equality.
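One widely used fairness test, sketched here in Python, is the "four-fifths rule" from US employment guidelines: the selection rate for any group should be at least 80% of the rate for the most-selected group. The screening outcomes below are hypothetical.

```python
def disparate_impact_ratio(selections):
    """Selection rate of the least-selected group divided by the
    rate of the most-selected group (the 'four-fifths rule')."""
    rates = {group: sum(s) / len(s) for group, s in selections.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
outcomes = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1],   # 6/8 = 0.75
    "women": [1, 0, 1, 0, 1, 0, 0, 1],   # 4/8 = 0.50
}
ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 2))  # 0.67 -- below the 0.8 threshold, flag for review
```

A check like this is a minimum bar, not a guarantee of fairness, but running it routinely against a model's outputs makes biased behaviour visible before the tool reaches candidates.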
Once the AI algorithms are built without bias, they can help drive equal opportunity and enable diversity across various aspects of the hiring process. This includes:
- Automating candidate selection to remove human bias, enabling a more diverse and better-qualified set of candidates to be identified as suitable for the role.
- Removing bias from job descriptions by scoring them against neutrality criteria and suggesting improvements so they appeal to a diverse range of applicants.
- Onboarding and mentoring to personalise the onboarding experience and identify ideal mentors for new employees based on personality, skills and other preferences.
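As an illustration of the job-description point above, a neutrality score can be sketched as a simple check for gender-coded language. The word lists here are hypothetical and tiny; production tools use much larger, validated lexicons.

```python
import re

# Hypothetical word lists for illustration only.
MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def neutrality_score(text):
    """Return a 0-1 score, where 1.0 means no gender-coded words found."""
    words = re.findall(r"[a-z]+", text.lower())
    coded = [w for w in words if w in MASCULINE_CODED | FEMININE_CODED]
    return 1.0 if not words else 1.0 - len(coded) / len(words)

ad = "We need an aggressive, competitive rockstar to dominate the market."
print(round(neutrality_score(ad), 2))  # 0.7 -- 3 of 10 words are coded
```

A scoring tool would pair a score like this with concrete rewording suggestions, nudging the ad toward language that appeals to a broader range of applicants.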
It is an organisation’s role to strike a balance between human and AI insight to recruit people from diverse backgrounds with different perspectives. Ensuring diversity is not only the right thing to do, but also the key to innovating and thriving in the digital economy.
Dr Susan Entwisle is Assistant Vice President, Digital Business, Cognizant