"

Lindsay Osborn – Artificial Intelligence and the Risks of Mis/Disinformation and Propaganda in Online Spaces

Lindsay Osborn is a senior from La Porte, Indiana. She is pursuing her Bachelor of Science in Communication Studies with a minor in Technical and Professional Writing. “Artificial Intelligence and the Risks of Mis/Disinformation and Propaganda in Online Spaces” is an outline written for a final video presentation for the course C328 Digital Responsibilities and Rights in Fall 2023. It addresses the growing concerns about artificial intelligence and presents an argument for implementing lessons on media literacy as part of a standard curriculum. Professor Natalia Rybas notes, “I want to celebrate Lindsay’s commitment to excellence and her skills to produce deep thinking even in low stakes assignments. She definitely takes pride in what she writes and strives to make it personal and analytical. Thank you.”

Artificial Intelligence and the Risks of Mis/Disinformation and Propaganda in Online Spaces

Presentation Outline

  1. Introduction
    1. Background information: Artificial intelligence has become a hot-button issue for society. This technology, which boasts the potential to save people time and money, has simultaneously stepped into a proverbial ethical minefield. Concerns over AI span numerous sectors, from the fear of job loss through excessive automation, to the scraping of personal data from the internet to train AI without permission, to the ethics of training AI on any form of intellectual property. The pressing concern I am focusing on today, however, is the use of generative AI to spread misinformation and disinformation across social media.
    2. Thesis: Misinformation, the spread of false information, and disinformation, the intentional spread of false information, are not new issues; however, booming technological advancement, paired with the ease of sharing information online, is a recipe for a volatile society, particularly in matters of political stability and global diplomacy. This complex and rapidly growing issue must be addressed swiftly, but it must also be handled deftly and with nuance.
    3. Credibility statement: With the upcoming 2024 election and the rise of artificial intelligence, I have developed a sincere interest in AI and mass communication and wish to explore the potential steps society should take to inoculate users against the spread of propaganda and inaccurate information.
    4. Preview main points: First, I will briefly explore AI and how it can be used to spread misinformation. Then I will discuss the attempts made to combat the spread of misinformation in online spaces. Finally, I will close by exploring the importance of societal and community-based efforts to improve media literacy and critical thinking skills.
  2. Main point 1: AI as a generative tool has the ability to create and share information at a rapid rate. As such, my research has shown that the technology’s ability to create fake images, false statements, and even falsified yet increasingly believable audio and video of notable people and events is a cause for considerable concern, particularly in the realm of political and societal development.
    1. A PBS report explored the potential dangers that mass communication and generative AI pose. The reporters interviewed cybersecurity professionals and experts involved in the creation of modern AI models, who warned that no one is prepared for the coming disinformation campaigns.
    2. This research explained that AI programs have been developed to clone voices, such as a presidential candidate’s, and that this technology can be used to provide false information about election dates, synthesize false confessions to a crime, and even create deepfake videos of leaders giving speeches or interviews they never gave.
    3. Furthermore, PBS explained that generative AI has already been used by the Republican National Committee for campaign ads. An additional report by NPR details the danger of using this technology irresponsibly.
    4. The RNC campaign ads depicted “what if” scenarios set in a world where Biden was reelected in 2024. The commercial featured manipulated videos of CNN reporters and AI-generated imagery of potential future tragedies. NPR noted that machine-generated propaganda is incredibly powerful at swaying opinions. Researchers at Stanford and Georgetown conducted an experiment to see how persuasive the technology could be. They trained their AI models on articles that aligned with Russian or Iranian propaganda, then asked the models to generate a fake story about Saudi Arabia helping fund the U.S.–Mexico border wall and another about Western sanctions leading to a shortage of medical supplies in Syria. Their study found that nearly half of the people who read the fake stories agreed with the border wall claim, and a staggering 60% agreed with the AI-generated propaganda about the Syrian medical supply shortage. Researchers warn that catching AI-generated propaganda is currently difficult, and experts are still trying to figure out how to handle this spread of AI disinformation.

Transition: These issues have not gone unaddressed by governmental agencies and corporate entities. Nevertheless, corporate and state control over AI comes with its own controversies. Next, I’ll explore the ways corporate and state actors have addressed misinformation, and why these solutions act only as a Band-Aid amid a growing crisis.

  3. Main point 2: There have been attempts to address the rapid changes spurred by artificial intelligence. Governmental agencies, corporations, and tech companies have explored the ethics involved in AI, but issues may still persist.
    1. Most recently, President Biden issued an executive order to safeguard against bad actors using AI for cyberattacks and misinformation, requiring regular safety testing and encouraging the implementation of watermarks.
    2. While these goals may help develop a safety net or an industry standard for the budding technology, there are still concerns about government actors and corporations having control over datasets and online information. Fortner, in Ethics in the Digital Domain, notes that the individual sits in the middle of the context of truth. State actors and corporations benefit from having control over information; while such control can be useful in protecting national security, too much of it risks global dominance. Additionally, Fortner explains that corporate and technological capabilities overlap and can exert excessive control in pursuit of profits. Furthermore, interlopers, who delay, disrupt, delegitimize, and destroy the world around them, have a hand in inciting greater control over mass media. Fortner shows us that, as individuals, we have far less control than the major state and corporate actors. So, what can we do as individuals to combat misinformation?

Transition: It is a complicated matter to expect corporate and governmental agencies to take full and ethical responsibility for implementing online safety features, and if not handled delicately, such efforts may impose heavy-handed restrictions on online spaces. This last section addresses my research into what communities and individuals can do to overcome the onslaught of misinformation and data consumption.

  4. Main point 3: The online world can be a tumultuous environment, and it is easy to feel like the ethics and safety of media consumption are beyond our control. However, we do possess tools to protect ourselves.
    1. A study conducted by researchers at Dublin City University, “Information and Media Literacy in the Age of AI: Options for the Future,” examined how advancements in AI technology have altered the way researchers and educators must address digital and media literacy.
    2. This research acknowledges that AI has taught us that technology can change rapidly and that the outpouring of online data can affect society within days and weeks. While the authors note that developing media literacy skills is a complicated undertaking, we do have options to consider. Their article explained that one option is to implement a unified approach to media literacy that adopts a principles-first methodology; that is, the researchers suggest focusing on developing individuals’ information and media literacy skills. Educators may be able to focus lessons on the implications of specific media and how their context may or may not personally impact the individual.
    3. Additionally, in her chapter in Produsing Theory in a Digital World 2.0, Markham explains that creating a more ethical world requires future-oriented ethics, which in turn requires a crucial level of media literacy.
    4. Markham’s research bolsters the claim of focusing on media literacy from the individual’s perspective, encouraging the practice of exploring what we wish to become and not just where we came from. This is not a form of thinking that should be limited to university and graduate-level students. Media literacy, or critical thinking in general, is a necessary skill set that should be taught from an early age to combat the uncritical consumption of misinformation.
  5. Conclusion:
    1. Summary/Final Thoughts: The age of AI has brought promises of reduced human labor and fears of fake news being created and disseminated at an increasingly rapid rate. And while every sector (governmental, corporate, tech developers, and the individuals of society) knows that something needs to be done to address these concerns, we are still struggling to achieve a balance that doesn’t disrupt or harm society. As individuals, we hold only so much influence over the management of mass media; our power lies in controlling our media consumption and understanding the media that we consume.
    2. Action: Ultimately, while the standalone user has only so much control over the way content is distributed and shared online, the greatest strength a community has is education. By implementing activities that bolster media literacy and critical thinking, society may not be able to stop misinformation, but it can invalidate the power misinformation holds.

References

Fortner, R. (2021). Ethics in the Digital Domain. Lanham, MD: Rowman & Littlefield.

Jingnan, H. (2023, June 29). AI-generated text is hard to spot. It could play a big role in the 2024 campaign. NPR. www.npr.org/2023/06/29/1183684732/ai-generated-text-is-hard-to-spot-it-could-play-a-big-role-in-the-2024-campaign

Klepper, D., & Swenson, A. (2023, May 14). AI-generated disinformation poses threat of misleading voters in 2024 election. PBS NewsHour. pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election

Markham, A. (2015). Producing ethics for digital near futures. In R. A. Lind (Ed.), Produsing Theory in a Digital World 2.0 (pp. 247–265). New York: Peter Lang.

Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9), 906. https://doi.org/10.3390/educsci13090906

Zahn, M. (2023, October 30). Biden executive order imposes new rules for AI. Here’s what they are. ABC News. abcnews.go.com/Business/biden-executive-order-imposes-new-rules-ai/story?id=104472977

