"

3 Automated Biases: How AI Values Are Reproducing Social Biases

Many new and different forms of AI are becoming more widely used in automated systems for a multitude of different application processes. This fast-growing technology poses serious threats to low-income families, individuals of color, and individuals of different sexual orientations. It is important to understand how these systems work and to acknowledge that, despite being machines, they are still very capable of discriminating against certain minority groups because of the programmers who create their algorithms. The effects of this digital discrimination are lifelong and, in some scenarios, life-threatening. With more and more algorithms being created daily, it is up to us as a society to address this situation before it grows beyond our control. People assume AI is value neutral when it actually tends to reproduce social biases, which causes harm when people do not oversee and check it.

There is a common misconception that because artificial intelligence systems are not human, they cannot hold discriminatory or racist values. This is in fact false: artificial intelligence is programmed by humans and therefore carries traces of its programmers. In Discrimination, Artificial Intelligence, and Algorithmic Decision-Making, Professor Frederik Zuiderveen Borgesius comments, “AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI” (Borgesius “Executive Summary”). This discrimination exists in many different forms. In her book Automating Inequality, Dr. Virginia Eubanks addresses several automated systems guilty of discrimination. Kim Stipes, a subject of one of Eubanks’s interviews, fell victim to automated inequality and “lost her Medicaid benefits during Indiana’s experiment with welfare eligibility automation” (Eubanks 39). Because she could not afford health insurance for herself and prioritized that money toward health insurance for her children, she had to cancel her coverage altogether, despite applying for a plan meant to aid the poor. Kim fell victim to the AI commonly used in healthcare applications, most often machine learning, natural language processing, rule-based expert systems, and robotic process automation. These systems are widely perceived as making it very challenging for applicants to be accepted because of the programs’ complicated formats. It comes as no surprise that “aspects related to income, location or lifestyle may also lead to digital discrimination. A very clear example of intentional direct discrimination is the current practice of targeting low-income population with high interest loans… there is an increasing risk of not considering particular disadvantaged groups who may not be able to participate in data collection processes… this has the potential to not only ignore the needs and views of these marginalized groups in critical policy-making and industry-related decisions about housing, health care, education, and so on; but also to perpetuate the existing disadvantages” (Criado 6). Targeting the low-income creates a barrier to entry that is almost impossible to get past. This discrimination is “masked” by AI programs that make healthcare applications look incredibly involved and difficult to qualify for, when in fact the programs are targeting poor individuals. It comes again as no surprise that u/thedrakeequator, much like Kim Stipes, experienced hardship when applying for the Healthy Indiana Plan. The user commented in a Reddit post, “I applied in November and it took them 3 months to review my application… Now I have to reapply, which I’m fine with because they said they would expedite my application… but what I want to ask, is there a light at the end of the tunnel? Have any of you successfully navigated this mess? It seems intentionally designed to be difficult” (u/thedrakeequator, Reddit). These Reddit threads exemplify a larger, more severe pattern of AI-enabled discrimination, in which the United States fails to rectify known bias in state-sponsored, AI-mediated medical insurance systems.

Beyond the immediate effect of Kim Stipes and other low-income applicants being denied the benefits of the Healthy Indiana Plan because of digital discrimination, there are many additional “domino effects” that result from the denial. Returning to the Stipes case, the denial of public resources through the Healthy Indiana Plan posed serious health threats to Kim’s daughter, Sophie. Without Medicaid, “Sophie’s care would have been financially overwhelming. Her formula was incredibly expensive. She needed specialized diapers for older children with development delays” (Eubanks 41). In denying the Stipes family the public resources necessary for their daughter’s survival, digital discrimination posed life-threatening consequences for a 6-year-old. When an individual is unjustifiably denied healthcare coverage because of digital discrimination, the applicant is put at risk of losing their life. Digital discrimination also prevents low-income families from climbing out of financial trouble, as the families who are denied ultimately have to pay for their medical expenses out of pocket. Ironically, the arguments supporting the use of artificial intelligence “often point to its potential to stimulate economic growth-increased productivity at lower costs, a higher GDP per capita, and job creation” (Akselrod, “How Artificial Intelligence Can Deepen Racial and Economic Inequities”), when in fact it is slowing economic growth and prioritizing those who can afford public resources over those who cannot. These effects are long-lasting, and “rather than help eliminate discriminatory practices, AI has worsened them-hampering the economic security of marginalized groups that have long dealt with systemic discrimination” (Akselrod, “How Artificial Intelligence Can Deepen Racial and Economic Inequities”). Aside from the more financially centered effects of digital discrimination in healthcare, racism is also a major factor in healthcare AI algorithms. According to “Artificial Intelligence and Discrimination in Health Care,” a 2020 article in the Yale Journal of Health Policy, Law, and Ethics, “algorithms have regularly underestimated African Americans’ risk of kidney stones, death from heart failure, and other medical problems” (Hoffman and Podgurski). In this context, individuals of color may be targeted by healthcare AI, run the risk of not receiving equal medical treatment, and therefore may suffer more medical problems. Beyond the obvious effects of a rejected healthcare application, there are many long-term discriminatory and racist effects that will recur as a result of digital discrimination.

While many statutes ban AI-driven discrimination on the basis of race, gender, or sexual orientation, “the statutes do not apply to discrimination on the basis of financial status for instance. Data protection law can help fill some, but definitely not all, gaps in non-discrimination law” (Borgesius 66). The challenges surrounding regulation of digital discrimination revolve around the fact that AI is a fast-developing technology. Yet these challenges, as Professor Frederik Zuiderveen Borgesius highlights, “are not unique for AI; there is experience with regulating new technologies” (Borgesius 61). Perhaps the most realistic way to prevent digital discrimination right now lies within the morals and ethics of the programmers at these companies. Guidelines implemented at the company level can be “amended faster and can thus be more specific and concrete. Guidelines should be evaluated regularly and amended whenever required” (Borgesius 62). Guidelines and regulations would also work to promote transparency within companies and help expose the racist and discriminatory programs responsible for this injustice. A lack of transparency is often used to conceal the discrimination in these programs: it was witnessed earlier when Kim did not know she was signing away her family’s insurance policy, and again in the many comments on the Healthy Indiana Plan claiming that the application was “intentionally difficult and long.”

Automated biases have posed significant threats to low-income families, individuals of color, and individuals of different sexual orientations. By understanding how these systems work, our society can move away from the preconceived notion that AI is value neutral and instead push back against the social biases reproduced through the algorithms of its programmers. Even a small change, such as our state legislators rectifying the known biases reproduced in state-sponsored, AI-mediated medical insurance systems, would make a major difference in our society. On the surface, these AI systems appear efficient, advanced, and precise. In reality, our society cannot effectively utilize these systems until there is social change, and many of them, ironically, have worsened the economic security of marginalized groups that have already experienced the pressure of systemic discrimination. The effects of digital discrimination are long-lasting and, in many cases, life-threatening. With more systems being created daily, it is up to us as a society to move past the stigma surrounding the idea of value-neutral AI and see AI for what it is: a reproduction of values, passed down digitally through the code its programmers write, that reflects the same values society possesses today.

Works Cited


Akselrod, Olga. “How Artificial Intelligence Can Deepen Racial and Economic Inequities.” American Civil Liberties Union, https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/.

Borgesius, Frederik Zuiderveen. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Council of Europe, https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73.

Criado, Natalia, and Jose Such. “Digital Discrimination.” ResearchGate, https://www.researchgate.net/profile/Jose_Such2/publication/336792693_Digital_Discrimination/links/5e0da14aa6fdcc28374ff8b4/Digital-Discrimination.pdf.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Picador, 2019.

Hoffman, Sharona, and Andy Podgurski. “Artificial Intelligence and Discrimination in Health Care.” Yale Journal of Health Policy, Law, and Ethics, vol. 19, no. 3, 2020, https://digitalcommons.law.yale.edu/yjhple/vol19/iss3/1/.

u/thedrakeequator. “Healthy Indiana Plan, Anyone Having Any Luck with It?” Reddit, r/Indiana, https://www.reddit.com/r/Indiana/comments/f25hs1/healthy_indiana_plan_anyone_having_any_luck_with/.
