3 AI-Powered Digitized Job Interviews & Their Impact on People of Color
Mimi McLean
The use of AI technology in companies’ job recruitment and selection processes has become increasingly common. Companies have begun using artificial intelligence to make their hiring processes quicker and more efficient. Three popular AI innovations used by corporate human resources departments are intelligent screening software, recruiter chatbots, and digitized interviews. The primary reasoning behind using artificial intelligence at this stage is to save recruiters’ time, make the hiring process more efficient, and eliminate human bias toward factors such as race and gender in job interviews. While using AI can certainly save time and make the process more efficient, the technology has not yet reached the point of overcoming human biases entirely. In fact, these AI systems can unintentionally learn various human biases, which could make the problem even worse than it already is. AI-powered digitized interviews therefore have the potential to significantly affect racial discrimination in companies’ recruitment processes. Their impact on people of color is primarily negative: the AI technology cannot see past certain human biases, relies on unreliable voice analysis algorithms, and is not yet advanced enough to accurately assess candidates of different races.
AI-powered recruitment technology comes in many shapes and forms, but the one that gives the most decision-making power to the technology is the digitized interview. Many companies have put their initial recruitment processes in the hands of this artificial intelligence, which gives it the potential to either reduce or worsen the racial discrimination and biases candidates face during hiring. Today’s recruitment technology claims to “use AI to assess candidates’ word choices, speech patterns, and facial expressions to assess his or her fit for the role and possibly even the organization and its culture” (AI for recruiting: A definitive guide for HR professionals). To some, it may seem beneficial to streamline factors such as word choice and speech patterns, but doing so actually gives the AI the power to learn human biases. For example, there are stereotypical language differences between Black and White Americans, with different slang typically used by each group in modern culture. If the AI interview software is programmed to penalize or reject candidates who use slang typically associated with the Black community, it worsens an already biased hiring process. If the technology rejects every candidate who uses a type of slang or speech pattern the company deems “unfit,” even fewer people of color may make it through the first round of recruitment than would under the bias of a human interviewer.
Another reason AI assessment of candidates’ word choices and speech patterns could negatively impact people of different races is the accents and language barriers of people from other countries. For example, a candidate whose first language is Spanish will most likely not speak as fluently or use as advanced a vocabulary as someone whose first language is English. The AI software will detect this difference in word choice and speech pattern and likely interpret it negatively, which could lead to the candidate being rejected simply because of their accent or speech pattern. This worsens racial discrimination in the hiring process because it automatically puts candidates of different ethnicities at a disadvantage.
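The word-choice mechanism described above can be illustrated with a toy sketch. The scorer below is entirely hypothetical (real screening systems are far more complex), but any system tuned to a designer-chosen vocabulary behaves this way at the margin: two substantively equivalent answers in different registers receive different scores.

```python
# Hypothetical illustration: a naive transcript scorer that rewards words
# found in a designer-chosen "preferred" vocabulary. Whatever bias that
# word list encodes, the scorer inherits.

PREFERRED_VOCAB = {  # an invented list standing in for the designers' choices
    "collaborate", "leadership", "deliver", "optimize", "team",
    "results", "communicate", "experience", "project",
}

def score_transcript(transcript: str) -> float:
    """Return the fraction of a candidate's words found in the preferred list.

    Unfamiliar word choices (dialect, slang, non-native phrasing) lower
    the score regardless of the answer's actual merit.
    """
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in PREFERRED_VOCAB)
    return hits / len(words)

# Two candidates giving the same substantive answer in different registers:
answer_a = "I collaborate with my team to deliver results"
answer_b = "Me and my crew always come through and get it done"

print(score_transcript(answer_a))  # scores higher
print(score_transcript(answer_b))  # scores lower for equivalent content
```

The point of the sketch is that nothing in the scoring logic mentions race, yet the vocabulary choice alone produces racially skewed outcomes.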
Another aspect of AI-powered digitized interviews with the potential to affect racial bias during the hiring process is the use of algorithms that analyze a candidate’s voice to assign personality traits. An MIT Technology Review article on the AI interview tool “MyInterview” states,
Instead of scoring our candidate on the content of her answers, the algorithm pulled personality traits from her voice, says Clayton Donnelly, an industrial and organizational psychologist working with MyInterview. But intonation isn’t a reliable indicator of personality traits, says Fred Oswald, a professor of industrial organizational psychology at Rice University. ‘We really can’t use intonation as data for hiring,’ he says. ‘That just doesn’t seem fair or reliable or valid.’ (Wall)
Based on this information, this type of technology has the potential to negatively impact people of color during the hiring process for multiple reasons. First of all, as Professor Fred Oswald said, personality cannot be determined solely from someone’s intonation and voice. This technology could therefore be programmed to favor white individuals and their “typical” patterns of voice inflection, or to be biased against people of color by matching certain types of voices or accents to negative or “unfit” personality traits.
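The kind of intonation-based inference Oswald criticizes can be sketched as follows. The mapping below is pure invention (no real tool is claimed to use this formula), which is precisely the problem: any fixed pitch-to-personality rule is arbitrary, and because typical pitch ranges differ by sex, age, accent, and language, the rule systematically favors some groups of speakers over others.

```python
# Hypothetical illustration: a scorer that maps average pitch to a
# "confidence" trait. The mapping is arbitrary; that arbitrariness
# is exactly why intonation is an unreliable basis for hiring.

def confidence_from_pitch(pitch_samples_hz: list[float]) -> float:
    """Map mean pitch onto a 0-1 'confidence' score (an invented rule).

    Nothing ties pitch to personality, so this fixed mapping rewards
    voices that happen to sit near the designer's chosen target.
    """
    mean_pitch = sum(pitch_samples_hz) / len(pitch_samples_hz)
    # Arbitrary designer choice: treat ~120 Hz as "most confident".
    return max(0.0, 1.0 - abs(mean_pitch - 120.0) / 120.0)

# Identical interview answers, different voices:
lower_voice = [118.0, 122.0, 119.0]   # mean ~120 Hz: near-perfect score
higher_voice = [205.0, 212.0, 198.0]  # mean ~205 Hz: heavily penalized

print(confidence_from_pitch(lower_voice))
print(confidence_from_pitch(higher_voice))
```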
Although it could be argued that these systems can be used to reduce racial bias, there is good reason to believe they may do the opposite. In the same article, Wall notes that “many of these tools aren’t independently tested, and the companies that built them are reluctant to share details of how they work, making it difficult for either candidates or employers to know whether the algorithms are accurate or what influence they should have on hiring decisions” (Wall). This discretion leaves room for biased AI hiring algorithms: companies could, consciously or unconsciously, program their hiring technology to reject people of color through the voice analysis used in digitized interviews.
Lastly, video interviewing technology is very limited when it comes to detecting and evaluating the performance of people of color. A study of the legal and ethical implications of AI recruiting software states, “One of the most controversial characteristics that could be analyzed with video interviewing is race. It is unfair, and it must be said that most algorithms and classifiers are trained with images of White people, not performing well with Black people. As Buolamwini and Gebru pointed out, there is enormous controversy regarding the inclusion of racial discrimination in algorithms. It has been shown that there are soap dispensers that detect easier White people than others” (Fernández-Martínez). Because most AI algorithms and classifiers are trained on images of White people, the software has more difficulty accurately assessing people of color. This clearly disadvantages candidates of color during the hiring process, since it is much harder for the technology to form an accurate understanding of them. A fundamental change in how this artificial intelligence software is programmed is needed before every candidate, including people of color, can be analyzed accurately and fairly. The same researchers concluded that,
The conflict behind AI for recruiting is that it relies on proprietary products trained with limited data sets. Even though they offer accuracy to look at certain characteristics, they were not thought of as mainstream recruiting tools in their beginnings. As a matter of fact, they are progressively adopted by large corporations with thousands of candidates for efficiency reasons. The software could not control potentially discriminatory outcomes if recruitment is carried out by the company under the wrong reasons or controlled by non-democratic state, e.g. being selective against minorities, women, people under or over a certain age, senior citizens, immigrants or customer with accents. Image processing could even filter candidates by appearance reasons. (Fernández-Martínez)
This conclusion further emphasizes that AI technology is not yet advanced enough to accurately and fairly evaluate job candidates. The current technology therefore more than likely harms people of color rather than helping them, whether as a conscious or an unconscious choice by the software’s programmers.
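The training-data imbalance Fernández-Martínez describes can be demonstrated with a toy simulation. The data and the classifier below are deliberately simple and entirely hypothetical: a single decision threshold is learned from a sample that is 95 percent group A, and the resulting model makes noticeably more errors on the under-represented group B, mirroring the disparities documented for face-analysis systems.

```python
# Hypothetical illustration: a toy classifier fit to an imbalanced sample
# performs worse on the under-represented group.
import random

random.seed(0)

def make_sample(group: str):
    """One (feature, label) pair; the two groups' feature values are shifted."""
    label = random.choice([0, 1])
    center = label * 2.0 + (0.0 if group == "A" else 0.8)  # group shift
    return center + random.gauss(0, 0.3), label

# Training set: 95% group A, 5% group B -- the imbalance is the point.
train = [make_sample("A") for _ in range(190)] + [make_sample("B") for _ in range(10)]

# "Train": learn a single decision threshold from the (mostly group-A) data.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def accuracy(group: str, n: int = 500) -> float:
    """Fraction of fresh samples from one group the threshold classifies correctly."""
    test = [make_sample(group) for _ in range(n)]
    correct = sum(1 for x, y in test if (x > threshold) == (y == 1))
    return correct / n

print(f"group A accuracy: {accuracy('A'):.2f}")
print(f"group B accuracy: {accuracy('B'):.2f}")  # lower: the threshold fit group A
```

No discriminatory intent appears anywhere in this code; the disparity arises purely from whose data the model was fit to, which is the unconscious path to bias the essay describes.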
Due to inaccurate and unconsciously (or potentially consciously) biased AI algorithms, digitized recruitment technology is negatively impacting people of color. Between the lack of regulation of the technology itself, the word choices and speech patterns the algorithms deem good or bad, the unreliable voice analyses, and the technology’s inability to accurately analyze people of color, artificial intelligence recruiting software hurts people of color more than it helps them. AI recruiting technology needs further advancement before it can improve the issue of racial discrimination in companies’ recruitment and selection processes.