
Artificial intelligence (AI) has just as many negative effects as humans in the hiring process

Bri Batteast

In recent years, artificial intelligence (AI) has seen growing daily use, including in the hiring process, whether for conducting interviews or scanning resumes. Before AI, hiring took up a lot of time and cost companies a great deal of money. Jessica Newman from ERE Recruiting Intelligence stated that, “It takes a typical U.S employer six weeks to fill a role, which costs roughly $4,000. So the desire to reduce hiring cost and speed up the recruitment process has understandably piqued people’s curiosity about AI” (Newman, 2020). When humans handle recruitment, it costs companies roughly $4,000 per hire, money that could be spent on something else, so of course AI sparks companies’ interest. Companies now use artificial intelligence to help find the right candidate for the job while also making the elimination of unfit candidates quicker. To find the best-fitted candidates, the AI will “analyze people’s facial movements, word choice, and tone of voice in an attempt to determine their employability” (Newman, 2020). But even though AI has made the hiring process more efficient, it is actually not that effective. AI carries the same negative tendencies, like biases, as humans when it comes to choosing candidates. Without human judgment there to check those tendencies, businesses risk not picking the best candidate out of the pool of applicants.

One flaw of having a human in charge of recruiting is that they can be biased and end up discriminating against people, which is against the law and can get the company in a lot of trouble. Artificial intelligence was built in the hope that it would be neutral and pick a great candidate based solely on their characteristics and skills. That is not always the case. Nish Parikh, a Forbes Human Resources Council member, points out that AI can actually be biased depending on how it is programmed. Parikh stated that, “While in theory, AI offers a more cost-effective, targeted, and efficient hiring process by helping organizations sift through volumes of resumes, in reality, it may promote biased hiring because of its reliance on unconsciously prejudiced selection patterns like language and demography. Many data experts claim that predictive AI perpetuates a status quo since it’s usually modeled on biased and inadequate data sets.” (Parikh, 2021). Since AI algorithms are built by humans, the programmers’ biases can show through the AI if they are not careful with the dataset used to train it. Sarah K. White, a CIO senior writer, also explains why an AI can turn out biased: “Because AI algorithms are typically trained on past data, bias with AI is always a concern. In data science, bias is defined as an error that arises from faulty assumptions in the learning algorithm. Train your algorithms with data that doesn’t reflect the current landscape, and you will derive erroneous results.” (White, 2021). If the AI’s training data is not diverse (it lacks representation or exposure to a wide range of people), the likelihood of it turning out biased is high, leading to the company not getting a diverse group of people to hire. “If a company is looking to diversify its workforce, using an AI in its hiring process may not be the best option. There are candidates out there who have atypical work experience but may still be the best fit for the position based on his or her personality, personal interests, character, and work ethic. These are factors that require human judgment. Using an AI in this sense can greatly reduce the diversity in a workforce.” (JobStreet, 2018). When it comes down to it, AI can be just as biased as humans, which defeats the ethical purpose of using it. And unlike humans, an AI cannot recognize when it is being biased or discriminating against someone, so it cannot check itself and change its behavior.
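The pattern White describes, a model trained on skewed historical data reproducing that skew, can be sketched in a few lines. This is a deliberately simplistic toy, not any real hiring system; the applicant data and the frequency-based “model” are invented purely for illustration:

```python
# Toy sketch (hypothetical data): a "model" that scores applicants by how
# often their attribute appeared among past hires. If the historical data
# is skewed toward one group, the scores simply reproduce that skew.
from collections import Counter

past_hires = [  # biased history: most past hires came from "State U"
    {"school": "State U"}, {"school": "State U"},
    {"school": "State U"}, {"school": "City College"},
]

def train(history):
    # "Training" here is just counting how often each school was hired from.
    counts = Counter(h["school"] for h in history)
    total = len(history)
    return {school: n / total for school, n in counts.items()}

def score(model, applicant):
    # Schools never seen in the history score 0: the model cannot value
    # what it was never exposed to.
    return model.get(applicant["school"], 0.0)

model = train(past_hires)
print(score(model, {"school": "State U"}))         # 0.75
print(score(model, {"school": "Tribal College"}))  # 0.0 — absent from history
```

A real hiring model is far more complex, but the failure mode is the same: an applicant from a background underrepresented in past data scores zero, not because they are unqualified, but because the dataset never contained anyone like them.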

Because AI is a system, it is less flexible when making decisions. For one, AI algorithms are too dependent on keywords. Each AI is programmed to search applicants’ resumes or interviews for keywords that meet the requirements of the company deploying it. For example, if a school is using an AI to help recruit teachers, it would program the AI to look for keywords that best fit a teacher. Some keywords (KSAOs) looked for when recruiting a teacher are: great communication skills, adaptability, and patience. The AI would choose people whose resumes or interviews contain those keywords to move forward in the hiring process. This can cause two problems. The first is that people who are not quality candidates can put the keywords in their resumes to get hired. “AI depends very much on certain keywords to scan through their pile of candidates. This can become a loophole for candidates who are familiar with how the system in AI is programmed, where they may include certain keywords that have the potential to trick the system and camouflage them as good fits for various positions, even though they are not.” (JobStreet, 2018). The AI cannot tell that an applicant doesn’t actually have those specific skills and only listed them to get hired. The same can be said of a human recruiter: they cannot fully gauge an applicant’s skill level until the person starts working. But unlike the AI, a recruiter can still get a sense of an applicant’s skills by talking with them and hearing, in the interview or cover letter, stories about how they have used those skills. The second problem is that qualified candidates who don’t have the keywords in their resumes may not get hired, and the company would never know it declined the candidate best fitted for the job; it can only judge whether the candidate it did hire was truly qualified.
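Both problems described above follow from the same mechanism. Here is a minimal sketch of a naive keyword screener; the keyword list and resume texts are hypothetical, invented for illustration, and no real screening product works this crudely:

```python
# Toy keyword screener (hypothetical): passes any resume containing enough
# keywords, with no way to verify the skills are real.
TEACHER_KEYWORDS = {"communication", "adaptability", "patience"}

def passes_screen(resume_text, keywords=TEACHER_KEYWORDS, minimum=2):
    # Split the resume into lowercase words and count keyword overlaps.
    words = set(resume_text.lower().split())
    return len(keywords & words) >= minimum

honest = "Ten years teaching with strong communication and patience"
stuffed = "communication adaptability patience"  # no real experience listed
qualified = "Led a classroom of 30 and mentored new staff for a decade"

print(passes_screen(honest))     # True
print(passes_screen(stuffed))    # True — keyword stuffing passes too
print(passes_screen(qualified))  # False — qualified, but wrong wording
```

The stuffed resume passes just as easily as the honest one, while the genuinely experienced candidate who phrased things differently is screened out, exactly the two failure modes the paragraph describes.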

Also, with AI being less flexible and biased, multiple factors during the hiring process can throw off the AI’s accuracy. Since the AI does not have a mind like a human’s, it is very likely to misunderstand a situation. In the ERE Recruiting Intelligence article “Warning: Do Not Use AI in Virtual Hiring,” Jessica Newman goes over how AI can frequently misunderstand situations during a virtual interview:

Current AI technology is notoriously prone to misunderstanding meaning and intent. A big reason for this is the vast cultural and social variations in how people express themselves. Speech-recognition software may not accurately assess people with regional and non-native accents. And facial analysis systems can struggle to read some faces, such as people with darker skin or women wearing certain shades of lipstick. Technology can also limit accuracy. If an applicant’s video quality or camera angle isn’t perfect, the algorithm could make a mistake. The same potential problem applies to poor audio connections. Automated audio transcriptions are not 100% accurate yet, which can lead to the wrong keywords being picked up and incorrectly interpreted by the AI engine. (Newman, 2020).

With the AI relying only on its dataset to pick candidates, which makes it inflexible and biased, its accuracy can be thrown off by simple mistakes or by different upbringings (upbringing here means how people speak, dress, or behave, things people cannot easily change). If the programmer does not account for the fact that there are many upbringings, or that people use different words to mean the same thing, the keywords an applicant believes are right may not be the keywords the AI’s algorithm is looking for.

Overall, using artificial intelligence in your hiring process can benefit you a lot from an efficiency perspective: it saves money and time when choosing a candidate. But in terms of effectiveness, whether you hired the best candidate out of the pool of applicants, there is a big possibility that the AI did not pick the best one. One thing humans have that AI lacks is judgment, and not being able to exercise judgment on a situation holds the AI back from making good decisions. “The biggest drawback, of course, is the lack of human judgment. If an organization intends to diversify its workforce, then AI-based hiring may not serve the purpose. There are candidates out there with atypical work experience who could be the best fit for their individual personality, interest, character, and work ethics. AI, void of any human attributes, will miss these traits.” (Parikh, 2021). Someone might not have the specific knowledge, skills, or attributes listed for a job, but that does not necessarily mean they are unqualified, and this is something AI is not able to see. If companies want to keep using artificial intelligence in the future to help save money, they are going to have to figure out a way to build human judgment into the system.

License

Group Anthology Book: AI in Business Copyright © by karakoch. All Rights Reserved.