By Ludmila Cmarkova

From healthcare to the automotive industry, retail, and even livestock management, artificial intelligence (AI) is widely used across a vast array of industries to improve time and cost efficiency, precision, and performance. Recruitment is also among the fields that have found a use for AI. While the application of this digital technology in Human Resource Management (HRM) certainly brings significant improvements to recruitment processes and helps industry professionals find the right people for the right job, there are certain ethical implications to take into account when introducing such technologies.

What AI-based recruitment tools promise is a data-driven assessment of candidates in terms of their strengths, weaknesses, organizational fit, and even their future potential in the company. These new technological advances are rapidly replacing traditional candidate evaluation techniques such as psychometric assessments, which recruitment professionals all over the world have relied on for many years. Psychometric methods generally assess candidates’ intelligence or cognitive abilities and then estimate the probability of their success in different work roles. An extensive body of scientific research supports the accuracy and reliability of such assessments in choosing the right candidate for the job.

With the emergence of novel digital technologies such as AI-powered assessment tools that are utilized in HR, the question of how accurate and reliable they are is more relevant than ever. 

While the potential of AI to help companies make cheaper, faster, and more objective decisions when recruiting new candidates sounds very promising, the current state of this technology is far from perfect. 

Bias 


One of the most problematic aspects of AI tools in recruitment is the possibility of unequal opportunities among candidates on the basis of gender, race, or other personal characteristics. AI recruitment tools select and recommend candidates based on the large amounts of data “fed” to them. The problem is that if the original data fed into such machine learning systems was already biased in some way, the algorithm will most likely be influenced as well and reinforce the existing bias.

A well-known example of an AI candidate assessment tool going wrong is the hiring algorithm that Amazon abandoned back in 2018. Since the system was trained on resumes submitted to the company over the previous ten years, which mostly came from men, the algorithm unsurprisingly turned out to duplicate hiring bias against women candidates. Simply put, if the company never hired a woman for a certain job position before, an algorithm “fed” this information is not going to consider a woman a suitable candidate for that position.
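
To see how this happens mechanically, here is a minimal, purely illustrative sketch; this is not Amazon's actual system, and all data and feature names below are synthetic. A classifier trained on historical hiring decisions in which gender correlates with the outcome will pick up that correlation as if it were a real qualification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: one genuine skill feature and a
# gender indicator (1 = male, 0 = female).
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Past hiring decisions favored men independently of skill,
# i.e. the training labels themselves are biased.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model learns a large positive weight on gender: it has
# encoded the historical bias as a predictive signal.
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))

# Two equally skilled candidates get very different predicted chances.
print(model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1].round(2))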

While scandals like these cast a pessimistic light on AI-powered recruitment tools, the potential of this technology in HR should not be abandoned. For example, researchers at Penn State and Columbia University have developed a machine learning technology focused on identifying and preventing bias and discrimination in hiring.

Privacy concerns 

Reinforcement of existing biases is not the only issue associated with the application of AI in HR. While many countries today legally forbid asking about or considering candidates’ personal lives, marital status, political preferences, sexuality, or pregnancy during recruitment, such questions are unfortunately still being asked by recruiters. In situations like these, candidates are left with the choice to either answer honestly or refuse to reply. Either way, they are put at risk of being discriminated against based on their answers.


As unfair as it is when a job applicant is pressed to answer inappropriate, and potentially discriminatory, questions, the candidate is at least given the opportunity to decide whether or not to disclose such information. Yet, with the emergence of technologies able to predict many of these things indirectly, without explicit consent from the candidate, the question of ethics and privacy stands strong. Almost ten years ago, it was already possible to predict individuals’ sexuality, religious beliefs, political preferences, race, or even relationship status and substance abuse with relatively high accuracy based on just a few Facebook likes.

Considering that such highly accurate predictions were already possible ten years ago, one can only imagine how much technology is able to tell about a person based on easily accessible digital records such as Facebook likes today. 
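
The underlying technique is not exotic: the published approach reduced a large user-by-like matrix with singular value decomposition and fed the components to a simple regression model. The following rough sketch mirrors that pipeline in spirit only; the data is randomly generated and the planted "trait" signal is an assumption made purely for illustration:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, n_pages = 2000, 500

# Synthetic binary user-by-like matrix (True = user liked the page).
likes = rng.random((n_users, n_pages)) < 0.05

# A hidden personal trait that happens to correlate with liking a
# handful of specific pages -- the kind of signal such models exploit.
trait = rng.integers(0, 2, size=n_users)
likes[:, :10] |= (trait[:, None] == 1) & (rng.random((n_users, 10)) < 0.4)

# Dimensionality reduction followed by logistic regression.
components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(components, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")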

The opacity of AI systems 

Overall, it all comes down to the lack of transparency about the inner workings of AI recruitment tools. At the moment, many of these systems function as so-called “black boxes”, with no proper public understanding of what they base their recommendations on. Understanding why, and particularly what types of, candidates an AI system rejects is essential for tackling the possibility of bias reinforcement.
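
One basic check that does not require opening the black box at all is to audit the system's outputs: compare selection rates across demographic groups, as in the “four-fifths rule” from US employment guidelines. Below is a minimal sketch, assuming we only have the model's accept/reject decisions and a group label for each candidate; the audit data is hypothetical:

```python
import numpy as np

def selection_rates(decisions, groups):
    """Per-group share of candidates the system accepted."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

def four_fifths_check(decisions, groups):
    """Flag disparate impact: lowest rate under 80% of the highest."""
    rates = selection_rates(decisions, groups)
    lo, hi = min(rates.values()), max(rates.values())
    return rates, lo / hi >= 0.8

# Hypothetical audit data: 1 = recommended for hire, 0 = rejected.
decisions = np.array([1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b", "b"])

rates, passes = four_fifths_check(decisions, groups)
print(rates)  # group "b" is selected at ~0.33 vs 0.5 for group "a"
print("passes 4/5 rule:", passes)  # False -> warrants investigation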

Even though the promise of technology reducing the subjectivity and biased decision-making of human HR professionals sounds enticing to many, AI is not without its flaws and there is still a long way to go until it is truly able to fulfill such promises. 

 

Cover: Tara Winstead

Edited by: Gaukhar Orkashbayeva
