"

Chapter 5

Sung A Bae

Evidence and Analysis: LAMP-M301

Vivian Halloran

November 14, 2021

The Effect of Algorithms on COVID-19

Ever since AI became widespread and began reaching ordinary people, it has helped humans in many ways and brought great convenience, including in the fight against COVID-19. For example, an algorithm developed in the United States accurately identified COVID-19 patients from the sound of their coughing alone, with a reported accuracy of 98.5%. Last December, deep-learning AI also helped predict how widely COVID-19 would spread. If AI is this capable and helpful, why is it not used on a broader, bigger scale? There are a few reasons. First, algorithms for COVID-19 are still defective and unstable: the AI models built to predict the spread of the virus were not effective, and there are problems with the datasets the algorithms rely on. Second, some algorithms are biased, and a biased AI algorithm can produce false statements that lead us to wrong conclusions.

Even though AI has helped us predict COVID-19, the prediction models have not been effective at forecasting how much, and in which direction, the virus would spread. Researchers analyzed 232 AI models and found that none were fit for clinical use against COVID-19. Not all the models tested were incompetent, however: two were singled out as promising enough to merit further research. This result shows that the AI models we are currently building are still unstable and cannot yet be trusted entirely. The problem is not limited to prediction: “With no standardization, AI algorithms for COVID-19 have been developed with a very broad range of applications, data collection procedures, and performance assessment metrics. Perhaps, as a result, none are currently ready to be deployed clinically” (Roberts et al. 199). Compared with the number of AI algorithms that have been created, the number that have reached clinical trials is significantly smaller. The reasons include bias in small datasets, volatility in large-scale international data, poor integration and prediction of multi-stream imaging data, and, finally, difficulties in implementing the developed algorithms in routine clinical treatment.

Algorithms can only analyze what they have learned from previous data, and their conclusions rest on those data. Therefore, for researchers to reach accurate conclusions and results, the algorithms need accurate data to learn from. However, much of the COVID-19 data was supplied by doctors who were too busy caring for COVID patients to curate it carefully, so the data could not be accurate. A review states, “Many of the uncovered problems are linked to the poor quality of the data that researchers used to develop their tools. Information about COVID patients, including medical scans, was collected, and shared in the middle of a global pandemic, often by the doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. But this meant that many tools were built using mislabeled data or data from unknown sources” (Heaven, MIT Technology Review). This quotation shows that the data itself is flawed and inaccurate, which makes the results even more inaccurate and untrustworthy.
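To see concretely why mislabeled data matters, consider a small sketch in Python (entirely hypothetical: the synthetic features stand in for real scans, and the numbers come from no cited study). The same simple classifier is trained twice on the same data, once with clean labels and once with a fraction of the labels flipped, and its test accuracy typically falls as the noise grows:

    # Hypothetical sketch: mislabeled training data degrades a classifier.
    # Synthetic features stand in for real COVID-19 scan data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for noise in (0.0, 0.2, 0.4):  # fraction of training labels flipped
        rng = np.random.default_rng(0)
        y_noisy = y_train.copy()
        flip = rng.random(len(y_noisy)) < noise
        y_noisy[flip] = 1 - y_noisy[flip]  # simulate mislabeled patients
        model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
        print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.3f}")

The model never sees the test labels, so any falling score reflects only the corrupted training data, which is exactly the situation Heaven describes.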

Even when the AI models were built for a specific medical purpose, they tended to lack precision and accuracy, especially in medicine. The researchers who build algorithm models often lack knowledge of medical studies, which leads to models that fall short for medical use. “Many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws” (Heaven, MIT Technology Review). However, the researchers’ lack of medical knowledge was not the only problem. According to Smith and Rustagi in the Stanford Social Innovation Review, “Data on risk and mortality is not sufficiently disaggregated by sex, race, or ethnicity.” In other words, the data we collect with these AI models cannot be broken down by group, and the information we have gathered is hard to organize. Smith and Rustagi also state, “Data for racial and ethnic groups is incomplete, and terms and labels are inconsistent” and “COVID-19 data tracking systems aren’t capturing data on immigrants and other marginalized populations.” These statements show that the data we have collected, and are still collecting, cannot be entirely trusted. We may be able to form a vague overall picture of the population as a whole, but the data cannot be organized, and it cannot yet be trusted, once we look deeper into sex, race, ethnicity, and so on. The second quotation notes that terms and labels are inconsistent; if the data collected from the AI models had been consistent at that level of detail, it might be somewhat more trustworthy and usable in the future. Since it is not consistent at all, the data collected from the AI cannot be entirely trusted. A small sketch of what such a breakdown would look like follows.
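To make “disaggregated” concrete, here is a hypothetical Python sketch (the records, groups, and numbers are all invented for illustration). The same outcome data is summarized once for the whole population and once broken down by group; the second view is the one the quoted critiques say is missing or inconsistent:

    # Hypothetical sketch of disaggregating outcome data by group.
    # All records and column values are invented.
    import pandas as pd

    records = pd.DataFrame({
        "sex":       ["F", "M", "F", "M", "F", "M"],
        "ethnicity": ["A", "A", "B", "B", "B", "A"],
        "died":      [0,   1,   1,   0,   1,   0],
    })

    # Aggregate view: one number for the whole population.
    print("overall mortality:", records["died"].mean())

    # Disaggregated view: the same data broken down by group.
    print(records.groupby(["sex", "ethnicity"])["died"].mean())

Note that inconsistent labels, for example “F” in one hospital’s records and “female” in another’s, would silently split one group into two rows here, which is precisely the inconsistency Smith and Rustagi describe.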

Lastly, some algorithms are biased. From our previous readings, we know that biased algorithms reinforce existing tendencies in human society. If we use biased algorithms on a topic as sensitive as the current COVID-19 pandemic, they will not improve the situation; they will only push people to blame one another and to decide who is wrong and who is right, creating massive inequality for future generations. According to Eubanks, “Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the health-care system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again” (Eubanks 6). What kind of bias and inequality does this create? Cass Sunstein, an American jurist, calls this an “echo chamber”: people tend to talk and communicate only with those who share similar thoughts and beliefs. AI algorithms behave the same way; they gravitate toward content they rank as better and become more and more biased. In COVID-19 care, “During clinical decision making, for example, well established biases against members of marginalized groups, such as African American and LGBT patients, can enter the clinical notes taken by healthcare workers during and after examination or treatment. If these free text notes are then used by natural language processing technologies to pick up symptom profiles or phenotypic characteristics, the real-world biases that inform them will be silently tracked as well” (Leslie et al., The BMJ). This quotation also shows how dangerous and untrustworthy a biased AI algorithm can be, as the sketch below illustrates.
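To show how such bias can hide behind a good-looking overall number, here is one more hypothetical Python sketch (all groups and predictions are invented). Overall accuracy looks strong, yet one group absorbs every missed diagnosis:

    # Hypothetical sketch: one overall accuracy can hide unequal
    # error rates across groups. All data here is invented.
    import pandas as pd

    preds = pd.DataFrame({
        "group":     ["A"] * 8 + ["B"] * 8,
        "actual":    [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
        "predicted": [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    })

    print("overall accuracy:", (preds["actual"] == preds["predicted"]).mean())  # 0.875

    # False negative rate per group: sick patients the model missed.
    sick = preds[preds["actual"] == 1].copy()
    sick["missed"] = sick["predicted"] == 0
    print(sick.groupby("group")["missed"].mean())  # A: 0.0, B: 0.5

An audit that reported only the 87.5% overall accuracy would never reveal that every missed patient belongs to group B; this is the kind of silent tracking of bias that Leslie and colleagues warn about.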

AI and machines are highly trustworthy when they are based on truth, facts, and numbers. When they are based on predictions instead, they should not be trusted blindly, since predictions can only ever give predictions. The first problem discussed was that the current data is inaccurate and untrustworthy. If the data a model rests on is trustworthy, its results will also be grounded in fact; if it is not, the model can produce only guesses. Second, beyond the data, the algorithms themselves lack accuracy: AI researchers are not the most knowledgeable about medicine and clinical practice, so the AI they build cannot fully function as medical AI. Lastly, current algorithms tend to be biased, which makes both the data and the AI untrustworthy, since bias only produces inequality and unfounded predictions. Using such biased and unreliable information on an issue as vital as the current pandemic would only bring people chaos. But suppose more reliable, fact-based data is gathered; in that case, clinical researchers and AI researchers working together to build AI models that are genuinely trustworthy will give people hope and help us keep living on. Because of the state of the data behind today's algorithms, an answer to COVID-19 cannot be given yet, but we are gathering more and more data, and there is hope that the problem of COVID-19 will be solved one day.


Works Cited

“Artificial Intelligence and Covid-19.” The BMJ, www.bmj.com/AIcovid19.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Picador, 2019.

Heaven, Will Douglas. “Hundreds of AI Tools Have Been Built to Catch Covid. None of Them Helped.” MIT Technology Review, 30 July 2021, www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/.

Hwang, Hyung Ju, et al. Development of Patients Triage Algorithm from Nationwide COVID-19 Registry Data Based on Machine Learning. 2021. EBSCOhost, search-ebscohost-com.proxyiub.uits.iu.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.2109.09001&site=eds-live&scope=site.

Khemasuwan, Danai, and Henri G Colt. “Applications and Challenges of AI-Based Algorithms in the COVID-19 Pandemic.” BMJ Innovations, BMJ Specialist Journals, 1 Apr. 2021, innovations.bmj.com/content/7/2/387.

Leslie, David, et al. “Does ‘AI’ Stand for Augmenting Inequality in the Era of Covid-19 Healthcare?” The BMJ, British Medical Journal Publishing Group, 16 Mar. 2021, www.bmj.com/content/372/bmj.n304.

Roberts, Michael, et al. Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans. 2020. EBSCOhost, doi:10.1038/s42256-021-00307-0.

Röösli, Eliane, et al. “Bias at Warp Speed: How AI May Contribute to the Disparities Gap in the Time of COVID-19.” Journal of the American Medical Informatics Association, vol. 28, no. 1, Jan. 2021, pp. 190–192. EBSCOhost, doi:10.1093/jamia/ocaa210.

Smith, Genevieve, and Ishita Rustagi. “The Problem with Covid-19 Artificial Intelligence Solutions and How to Fix Them.” Stanford Social Innovation Review, ssir.org/articles/entry/the_problem_with_covid_19_artificial_intelligence_solutions_and_how_to_fix_them.
