"

4 Impacts on Workforce Diversity as a Result of AI Recruiting Algorithms

Dillion Dayhoff

Multiple studies and surveys have shown that workforce diversity positively influences innovation, company earnings, and stock value (Eastwood 2020). Further, surveys have shown that young workers are more inclined to consider company diversity an important factor in their job search. It follows that businesses which do not treat employee diversity as a top priority are less likely to attract the strongest and most talented young candidates. As a direct result, today’s Human Resource departments are searching for innovative ways to identify promising, diverse applicants and to screen out applicants who lack the ‘unique’ qualities found in top candidates. Businesses have turned to technological solutions to identify qualities in applicants that may be imperceptible to human judgment, such as goodness of fit and probability of sustained employment.

These algorithms, while promising in their ability to identify traits which humans cannot, have spawned public concerns about the transparency of their decision-making and their capacity to avoid discriminatory bias. Recruiting algorithm vendors are not subject to public audit and, prior to 2020, were not held to any legal regulation; given this opacity, it is impossible for rejected applicants to know how an algorithm concluded that it should reject them, or for businesses to fully understand how an algorithm works for them beyond the historical performance and hiring data it is fed. The core concern arising from this lack of knowledge is that discriminatory bias remains baked in: because these algorithms are retrospective, relying solely on historical applicant data and data from a business’ top-performing employees to form predictive assessments, it is unclear how an algorithm can properly value diversity and its benefits when it lacks the capacity to assess an applicant solely on his or her own qualities. Companies which claim to prioritize diversity and choose to employ AI recruiting algorithms must understand the inherent limitations and disadvantages of the current technology and compensate for the blind spots those limitations create with human judgment.

AI algorithms, though often treated as fundamentally neutral and devoid of human emotion, are products of human design, and thus reflect the data inputs and desired outputs of the humans behind them. If a company employs an AI recruiting algorithm to identify applicants who would be deemed a good fit, but the algorithm only has access to performance data from the company’s best employees, the algorithm will tend to favor candidates who most resemble that limited pool of current employees, which, as a result of historical human hiring bias, most likely does not represent a truly diverse set of examples. Writing in the Harvard Business Review, Miranda Bogen observed that “if the underlying performance data is polluted by lingering effects of sexism, racism, or other forms of structural bias, de-biasing a hiring algorithm built from that data is merely a band-aid on a festering wound” (Bogen 2019). This quote illuminates how AI recruiting algorithms can be falsely perceived as tools of neutrality and fairness; a backwards-looking algorithm can only home in on promising applicants who match up to a business’ historical data, “rather than considering the possibility that this may lead it to pass over qualified applicants from non-traditional backgrounds who are under-represented in its historical data” (Eastwood 2020). Herein lies the necessity for supplementary human judgment in recruiting decisions: if an algorithm is incapable of weighing new applicant attributes (such as race or level of education) on equal footing with a company’s historical data, human input is required to advance an otherwise qualified applicant whom the algorithm would screen out, as humans are the only ones capable of recognizing this shortcoming.
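
To make this backwards-looking dynamic concrete, the following is a minimal sketch in Python, assuming entirely hypothetical data and a made-up similarity rule rather than any vendor’s actual model. A ‘retrospective’ recruiter scores applicants by how closely they resemble a company’s historical top performers, so two applicants with identical skill receive different scores purely because one’s background matches the historical pool:

    from statistics import mean

    # Hypothetical "top performer" data, shaped by past (possibly biased)
    # hiring: (skill_score, attended_traditional_feeder_school)
    historical_hires = [
        (0.82, 1), (0.78, 1), (0.91, 1), (0.75, 1), (0.88, 1),
        (0.85, 0),  # only one past hire from a non-traditional background
    ]

    avg_skill = mean(skill for skill, _ in historical_hires)
    avg_background = mean(school for _, school in historical_hires)

    def similarity_score(skill, traditional_background):
        # Score applicants by closeness to the historical "profile" on
        # both features; a purely retrospective model has no notion of
        # which features actually matter for the job.
        return 1.0 - (abs(skill - avg_skill)
                      + abs(traditional_background - avg_background)) / 2

    # Two applicants with identical skill; only their background differs.
    print(similarity_score(0.85, 1))  # ~0.91: matches the historical pool
    print(similarity_score(0.85, 0))  # ~0.57: penalized for background alone

The background feature says nothing about ability to do the job, yet because the model only pattern-matches against past hires, it penalizes the applicant whose profile is under-represented in its training data.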

AI recruiting algorithms continue to play a role in the hiring process beyond the interview stage, inciting further concerns about hiring bias along the way. After an employer decides on an applicant to hire, “other predictive tools seek to help the employer make an offer that the candidate is likely to accept” (Bogen 2021). As in the interview process, the algorithm can only make decisions based on the data it is fed; because an applicant’s prior salaries are pertinent to an algorithmic decision about what pay they should receive at a new firm, the AI tool might demand that the applicant reveal their past salary figures (Bogen 2018). This data dependency could let employers obtain, through the tool, the very salary history that legislation was put in place to keep out of hiring decisions, precisely because of its potential for discriminatory bias. That bias has contributed to “longstanding patterns of pay disparity” (Bogen 2018) and, because of the persistent data dependency exhibited by AI recruiting algorithms today, could persist indefinitely until the technology no longer depends so heavily on applicants’ personal information. The fact that laws already exist to prevent intrusive recruiters from acquiring private information from applicants (Bogen 2018) exposes a glaring contradiction in AI recruiting tools: an algorithm created to see features in applicants which humans cannot still depends on the same private information employers are intentionally barred from acquiring.
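
To see how anchoring offers to salary history preserves disparity, consider a minimal sketch in Python, assuming a made-up offer formula rather than any vendor’s actual pricing logic. Two equally qualified applicants start with a 10% pay gap, and every subsequent offer is pegged to the prior salary:

    # Hypothetical offer rule: each new offer anchors to prior salary
    # plus a standard 5% bump (not any real vendor's pricing model).
    def anchored_offer(prior_salary, raise_rate=0.05):
        return prior_salary * (1 + raise_rate)

    # Two equally qualified applicants; B's starting salary is 10% lower.
    salary_a, salary_b = 100_000.0, 90_000.0
    for move in range(1, 6):
        salary_a = anchored_offer(salary_a)
        salary_b = anchored_offer(salary_b)
        gap = (salary_a - salary_b) / salary_a
        print(f"move {move}: A={salary_a:,.0f}  B={salary_b:,.0f}  gap={gap:.0%}")

Because each offer is a multiple of the previous salary, the relative gap never closes; only an offer set independently of salary history breaks the pattern.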

Yet another concern regarding a business’ use of AI recruiting algorithms is the underlying motivation for their use, and whether that motivation takes anti-bias and anti-discrimination practices into account at all. AI algorithm vendors originally developed these tools to expedite the recruiting process and reveal trends or similarities between applicants and top employees which humans cannot ascertain; in essence, these tools exist to make operating a business easier and to help Human Resource departments make ‘better’ decisions about who they hire. In describing the general purpose of these tools, Miranda Bogen writes, “employers turn to hiring technology to increase efficiency, and in hopes that they will find more successful–and sometimes, more diverse–employees… Most employers want to reduce time to hire, the amount of time it takes to fill an open position… Employers also want to reduce cost per hire, or the marginal cost of adding a new worker, which is roughly $4,000 in the U.S.” (Bogen 2018). Large corporations, AT&T, P&G, and Allstate among those that use AI recruiting algorithms, have long been criticized for indifference toward the wellbeing of their lower-level employees and a ‘cash is king’ operating philosophy. Bogen goes on to point out that, after the ethical issues that followed the widespread adoption of AI recruiting algorithms, businesses are now more conscious of the problems these tools pose but take a back seat when it comes to tangible action. Herein lies an issue of blame: businesses and AI vendors are at a finger-pointing standstill over who is responsible for improving the technology to rid it of latent institutional bias. Both businesses and AI vendors want to minimize liability and maximize revenue (as well as recruiting efficiency), as has always been the case; only recently, under public criticism, have they been forced to consider the discriminatory repercussions of the technology they employ (Bogen 2021). For AI recruiting tools to be used to their fullest potential, neutrally identifying promising applicants, the businesses which contract their use and the AI developers who write their code must both take accountability for the discriminatory shortcomings now in the public spotlight, as well as for the general ethical implications of their use.

Of the litany of ethical issues brought about by the intersection of business and AI, discriminatory hiring practices and biased AI recruiting algorithms pose a particularly pressing threat to the future diversity of the global workforce. With more and more studies and surveys touting the benefits of hiring workers from diverse backgrounds, businesses must now decide how seriously they treat diversity among their own employees. While many businesses have chosen to invest in AI recruiting for the sake of time and cost efficiency, little has been done by either businesses or AI vendors to combat the perpetuation of recruiting bias; as these discriminatory issues continue to evolve, businesses need to be aware of where their recruiting practices fall short and when to supplement those blind spots with traditional human judgment.
