Promoting diversity in the workplace is not just about being more inclusive or appearing “politically correct”. As several studies have noted, more diverse companies also outperform less diverse ones.
According to one of these studies, conducted in 2018 by DDI, diversity directly impacts company results. The global leadership study, which drew on data from 2,400 companies in 54 countries, found that women occupy less than a third of leadership roles (29%) – most of them at the lower levels of the corporate ladder.
Even with this imbalance, the study found that companies where women held more than 30% of leadership positions, including more than 20% of senior roles, had better financial results than the others. These more diverse companies were also 1.4 times more likely to achieve sustainable growth.
Given these results, it’s easy to understand why diversity is one of the top priorities in HR today.
Artificial Intelligence as an ally
What many companies might not know is that Artificial Intelligence can be an ally in bringing more diversity to the workplace.
Applied well, this technology can neutralise sensitive steps in the hiring process, mitigate discriminatory biases, and help companies find the professionals who can truly make a difference to the organisation’s results.
Natural Language Processing (NLP) and Machine Learning can be applied, for example, to analyse soft skills – not just technical skills – and to identify which candidates are most aligned with the company’s needs and most likely to perform well in a given role. The algorithm might even surprise the recruiter by showing that some preconceived notions of the model employee were incorrect.
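As a minimal sketch of the idea – not any vendor’s actual system – candidate descriptions could be ranked against a role profile with a simple bag-of-words cosine similarity. Real systems use far richer NLP models; all names and texts here are invented:

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words."""
    wa, wb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(wa[w] * wb[w] for w in set(wa) & set(wb))
    na = math.sqrt(sum(c * c for c in wa.values()))
    nb = math.sqrt(sum(c * c for c in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical role profile emphasising soft skills
role = "collaborative communicator who mentors teams and adapts quickly"

candidates = {
    "A": "strong communicator, mentors junior teams, adapts to change",
    "B": "expert in databases and low-level systems programming",
}

# Rank candidates by similarity to the role profile
ranked = sorted(candidates,
                key=lambda c: cosine_similarity(candidates[c], role),
                reverse=True)
```

Here candidate A, whose description overlaps with the soft-skill profile, ranks above candidate B despite B’s technical strengths – the kind of ordering a keyword-only screen might miss.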
But can we really trust Artificial Intelligence?
Like any other technology, Artificial Intelligence can deliver amazing results, or terrible ones. It all depends on how it is trained and applied. A well-known example of AI misuse comes from the U.S. criminal justice system, where algorithms that assess a defendant’s likelihood of reoffending have been shown to replicate racial biases that discriminate against black defendants.
Issues like these occur because AI merely tries to replicate processes that a human would execute. Any discriminatory bias therefore stems from previous human behaviour and from the data used in training: when the input is biased or incomplete, the output will carry the same biases. If a manager is not satisfied with the output of an AI system, they will likely feel the same about the outputs produced by their team, because the AI is only replicating the team’s processes.
Unconscious biases, for example, can discriminate against a specific group (or groups) and lead to injustices in the hiring process. The traditional screening process, in which recruiters need to go over a large number of CVs, has been shown to carry discriminatory biases against women, minorities and older applicants.
This is where Artificial Intelligence, with the appropriate tools, can mitigate discriminatory biases. Sensitive data such as ethnicity, gender and age are not used in the process; however, seemingly innocuous information – such as the applicant’s address, academic background and professional experience – can be correlated with that sensitive data and introduce bias into the algorithm.
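To make the proxy problem concrete, here is a small illustrative check with invented data: even when gender is excluded from the model, a heavily skewed distribution of a “neutral” field such as postcode area means the model can still infer it indirectly.

```python
from collections import defaultdict

# Hypothetical applicant records: (postcode_area, gender). Gender is
# excluded from the model's inputs, but the postcode field remains.
applicants = [
    ("north", "F"), ("north", "F"), ("north", "F"), ("north", "M"),
    ("south", "M"), ("south", "M"), ("south", "M"), ("south", "F"),
]

counts = defaultdict(lambda: defaultdict(int))
for area, gender in applicants:
    counts[area][gender] += 1

# Share of each gender within each area; a strong skew means the
# "neutral" postcode acts as a proxy for the excluded attribute.
shares = {
    area: {g: n / sum(c.values()) for g, n in c.items()}
    for area, c in counts.items()
}
# In this toy data, 75% of "north" applicants are women, so the
# postcode field largely reveals the attribute the model never saw.
```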
So the first step is to build the AI so that it can be audited and any discriminatory biases identified. Biases can be mitigated before the AI is implemented, during processing, and in the final output, by recalibrating variables to bring more balance into the model.
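One standard audit of this kind – a general technique, not specific to any product – is the “four-fifths rule”: compare selection rates between groups and flag ratios below about 0.8 as possible adverse impact. A minimal sketch with invented screening outcomes:

```python
def selection_rate(outcomes):
    """Fraction of candidates selected (1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below ~0.8 (the "four-fifths rule") are a common
    flag for adverse impact in a screening step."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical screening outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% advanced to interview
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% advanced

ratio = impact_ratio(group_a, group_b)
flagged = ratio < 0.8  # this step would be flagged for review
```

In this toy data the ratio is 0.5, well below the 0.8 threshold, so the audit would flag the screening step for recalibration.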
There is, however, a tradeoff between accuracy and fairness: to mitigate biases, it is necessary to reduce the relevance of some data, or to remove it altogether. In the short term this makes the AI less accurate, because it is working with less information. To ensure fairness, companies must be willing to accept this as part of the bias mitigation process.