"

Recruiting Decisions Should Not Be Made by Artificial Intelligence

sukamerc

The current paradigm shift is the automation of operations through technology and algorithms. In human resource management, artificial intelligence (AI) is being used to scan resumes and conduct large-scale people analytics, allowing organizations to sift through enormous amounts of data in very little time. One might presume that this reduces bias in recruitment pipelines, since an algorithm could not possibly be prejudiced. However, recent evidence suggests that the biases of the people who build these systems become embedded in the algorithms and software they create, which in turn perpetuates the biases already present in the system. The efficiency that technology brings cannot be denied, but it should not come at the cost of equality. Companies should adopt AI-mediated interviewing technologies gradually and under strict scrutiny, in order to understand how biases can surface and to eliminate them. This allows a controlled transition toward automating the recruitment process while reducing bias, with AI used as a tool of assistance rather than being given the power to make decisions independently.

Using technologies such as people analytics no doubt has its benefits, and society is shifting toward heavy reliance on algorithms like these. Algorithms have the potential to be objective and can sort massive amounts of data, which offers an efficiency advantage over what a group of humans could accomplish. “Drawing on large pools of quantitative data from a variety of sources, people analytics is said to deliver only a single, bias-free representation of the truth to decision-makers (Bodie et al., 2016; Gal et al., 2017)” (Giermindl et al., 2021). This claim, however, does not capture the complexity of the situation. When dealing with people there is no single “objective truth,” and the readings from class present considerable evidence that algorithms can produce even worse outcomes for people belonging to marginalized groups. Algorithms and people analytics therefore have the potential to be more effective than human screening, but if deployed as they are now, they will uphold the biases programmed into them. If the areas where an algorithm shows bias are identified, recruiters can approach it with a different mindset, overseeing its results rather than blindly accepting them, and the creators of these algorithms can troubleshoot those areas when they return to improve their work.

Another reason artificial intelligence should not hold the absolute power it currently has comes from Giermindl et al. (2021), who note that one reason AI programs are biased is that they examine past data to make decisions and predictions about the future. This does not allow the algorithm to process applications on a case-by-case basis: the nuanced strengths of each application relative to the job requirements are overlooked, because the AI looks for absolutes and discards applications that may be qualified, just differently. Further evidence suggests other ways in which bias seeps into algorithms. One, offered by Caliskan et al. (2017), is that individual words carry connotations that can bias how an application is interpreted. While learning from past data, the AI may associate certain educational backgrounds with levels of success, creating a positive correlation between two data points. Though such an association originates in human decisions, a human can reflect on those decisions while an algorithm cannot. In this context, the algorithm becomes helpful when a recruiter uses it to identify the patterns it has learned, analyze why they arose, and act accordingly. The inability of an algorithm to reflect on its decisions should tell recruiters that AI cannot be allowed to make its own decisions; its outputs need to be overseen by a person to reduce bias.
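As a rough illustration of the word-association problem described by Caliskan et al., the short Python sketch below measures how much closer one word sits to one set of attribute words than to another, using cosine similarity between word vectors. The tiny vectors and word lists here are invented purely for the example and are not taken from any of the cited studies; real association tests use pretrained embeddings such as GloVe or word2vec.

```python
# A minimal sketch of the association-test idea: a word's "bias" is measured
# by comparing its average cosine similarity to two sets of attribute words.
# The 3-dimensional vectors below are made up for illustration only.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    # Positive score: the word sits closer to attribute set A than to set B.
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

# Hypothetical embeddings (values are invented for the example).
vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "him":      np.array([0.7, 0.3, 0.2]),
    "she":      np.array([0.1, 0.9, 0.3]),
    "her":      np.array([0.2, 0.8, 0.4]),
}

male_terms = [vectors["he"], vectors["him"]]
female_terms = [vectors["she"], vectors["her"]]

score = association(vectors["engineer"], male_terms, female_terms)
print(f"association score for 'engineer': {score:.3f}")
# A positive score means this toy embedding links "engineer" more closely to
# the male terms: exactly the kind of learned connotation a resume-screening
# model could inherit from its training text.
```

In this toy example the score comes out positive, showing in miniature how an occupation word can pick up a gendered connotation simply from the data it was trained on.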

In addition, there are several models that can be followed which build in “intentional” ways to mitigate bias. These findings are presented by Yarger et al. (2019), who compare different recruitment software platforms to analyze how companies can, and do, take measures to reduce the bias embedded in AI. They give examples such as Blendoor, Glassdoor, and a few others; the basic premise of these tools is that they act as a middle man between the applicant and the reviewer. The software strips the application of identifying information before presenting the resume or CV to the employer, which helps foster equity by removing the scope for bias (Yarger et al., 2019). These tools help organize the massive amounts of data recruiters receive, but they make no decisions on the recruiter’s behalf; rather, they put measures in place to mitigate the biases of the person reviewing applications. If similar measures are applied to the algorithms already in place, there will be fewer points at which an algorithm can exercise whatever bias it carries, and if the results are then audited correctly, bias can be reduced in the hiring process as it exists today.
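The “middle man” idea can be sketched in a few lines of Python: remove any field that could reveal an applicant’s identity before the reviewer sees the record. The field names and the sample applicant below are hypothetical and only stand in for the far richer resume data that the platforms surveyed by Yarger et al. (2019) actually handle.

```python
# A minimal sketch of blind screening: strip identifying fields from an
# application before a human reviewer (or a ranking model) ever sees it.
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "date_of_birth", "address"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

applicant = {
    "name": "Jane Doe",                       # removed before review
    "email": "jane@example.com",              # removed before review
    "skills": ["Python", "SQL", "statistics"],
    "years_experience": 4,
    "relevant_projects": 3,
}

print(redact(applicant))
# {'skills': ['Python', 'SQL', 'statistics'], 'years_experience': 4, 'relevant_projects': 3}
```

The design choice matters: the tool only changes what the reviewer sees, leaving the hiring judgment itself with a person.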

As noted above, there are measures that allow organizations to keep using algorithms extensively while auditing the results to mitigate bias. Though this arrangement is less efficient than simply deploying the algorithm, it is clearly preferable to an algorithm that keeps enforcing existing biases. Manual measures to prevent bias already exist, but they are not an efficient use of time. If the design measures posited by Yarger et al. are applied and refined over time, bias can also be engineered out of an algorithm: “Rather than auditing systems for bias after the fact, these tools are conceived with a design justice intention of removing known sources of human bias from the hiring process” (Yarger et al., 2019). The implementation of design justice seeks to revise algorithms in several of the areas that concerned both Giermindl et al. and Caliskan et al. Over time, the creators of algorithms will learn to curb these biases, but for now recruiters should monitor the outcomes and relay what they find back to the creators; used correctly, that information allows constant updates to the technology and further mitigates bias.

Recruitment can gradually rely on automation once the findings presented above are taken into account. Users and creators of recruitment technology should recognize that the algorithm is flawed when it comes to weighing individual cases and can produce biased results, which demands scrutiny of its output. Association tests modeled on the IAT, like those used by Caliskan et al., can be applied repeatedly to assess a machine learning system until it shows virtually no signs of bias; because this is an algorithm and not a person, it can be tested over and over, without tiring, until the results are satisfactory. Until then, the algorithm’s output should be kept in check and the technology introduced sparingly as it is perfected. I also do not believe final decisions should be made by algorithms at all: humans should use algorithms as aids, never as the decision-makers. For instance, Yarger et al. write, “To mitigate racial and gender bias, researchers have constructed preprocessing methods to maintain the accuracy of the data set. These methods include assigning more weight to underrepresented populations within the data set and duplicating data points in order to make up for under-representation” (2019), which shows that it is humans who recognize the system’s faults and then use the algorithm to their advantage in correcting them.
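The two preprocessing methods quoted from Yarger et al. (2019), weighting underrepresented groups more heavily and duplicating their records, can be illustrated with a small Python sketch. The group labels and records below are invented for the example; a real pipeline would apply these steps to the actual features used to train a screening model.

```python
# A minimal sketch of two preprocessing ideas: (1) per-record weights that
# favor underrepresented groups, and (2) naive oversampling that duplicates
# records from smaller groups until the data set is balanced.
import random
from collections import Counter

records = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1},                 # group B is underrepresented
]

counts = Counter(r["group"] for r in records)
largest = max(counts.values())

# Option 1: weights inversely proportional to group frequency.
weights = [largest / counts[r["group"]] for r in records]

# Option 2: duplicate records from smaller groups until every group
# appears as often as the largest one.
balanced = list(records)
for group, n in counts.items():
    pool = [r for r in records if r["group"] == group]
    balanced.extend(random.choice(pool) for _ in range(largest - n))

print("weights:", weights)
print("balanced group counts:", Counter(r["group"] for r in balanced))
```

Both options leave the final judgment about which correction is appropriate to the people building and auditing the system, which is precisely the point of the argument above.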

The shift toward automation suggests that every industry will eventually have to change its structures to accommodate the technology that will keep its systems running. At first the world will inevitably see the flaws of algorithms, especially because the technology is so new. Rather than shying away from algorithms, it is far more productive to embrace the direction the world is heading and work to improve the output. The priority of the technology, however, should not be efficiency; it should be breaking down systemic biases. Recruiters can seize this opportunity to bridge the gap of inequality for so many people rather than let the technology continue on its current path. While technology has the potential to cause harm, it can also be used to correct human flaws and to help marginalized groups.

 

Works Cited

Caliskan, Aylin, et al. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science, vol. 356, no. 6334, 2017, pp. 183–186. EBSCOhost, doi:10.1126/science.aal4230. Accessed 8 Nov. 2021.

Giermindl, Lisa Marie, et al. “The Dark Sides of People Analytics: Reviewing the Perils for Organisations and Employees.” European Journal of Information Systems, June 2021, pp. 1–26. EBSCOhost, doi:10.1080/0960085x.2021.1927213.

Yarger, Lynette, et al. “Algorithmic Equity in the Hiring of Underrepresented IT Job Candidates.” Online Information Review, vol. 44, no. 2, Dec. 2019, pp. 383–395. EBSCOhost, doi:10.1108/OIR-10-2018-0334.
