Julie Feighery
Analyzing AI-Generated Information
Although many responses produced by AI text generators are accurate, AI also often generates misinformation, and the answers it produces are frequently a mixture of truth and fiction. If you are using AI-generated text for research, it is important to be able to verify its outputs. You can use many of the skills you’d already use to fact-check and think critically about human-written sources, but some of them will need to be adapted. For instance, we can’t check the information by evaluating the credibility of the source or the author, as we usually do. Instead, we have to use other methods, like lateral reading, which we discussed in the first unit of the class.
Remember, the AI is producing what it calculates to be the most likely series of words to answer your prompt. This does not mean it’s giving you the definitive answer! When you choose to use AI, it’s smart to treat it as a beginning, not an end. Being able to critically analyze the outputs AI gives you will be an increasingly crucial skill throughout your studies and your life after graduation.
When AI gets it wrong
As of summer 2024, a typical AI model doesn’t assess whether the information it provides is correct. Its goal when it receives a prompt is to generate what it calculates to be the most likely string of words to answer that prompt. Sometimes this results in a correct answer, and sometimes it doesn’t; the AI cannot tell the difference between the two. It’s up to you to make the distinction.
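To make “most likely string of words” concrete, here is a deliberately tiny Python sketch. It is a toy stand-in, not how any real model is built: the probability table is invented, and real systems learn probabilities over enormous vocabularies. The key point carries over, though: the next word is chosen because it is statistically likely, not because it has been verified.

```python
# Toy stand-in for a language model (the probabilities below are invented).
# Given some context words, it simply emits the most probable next word.
next_word_probs = {
    ("largest", "planet", "is"): {"Jupiter": 0.71, "Saturn": 0.11, "Earth": 0.02},
}

def most_likely_next_word(context):
    candidates = next_word_probs.get(context, {"[unknown]": 1.0})
    return max(candidates, key=candidates.get)  # highest probability wins

# "Jupiter" comes out because it is likely, not because anything checked
# that it is true. A wrong answer would be produced just as confidently.
print(most_likely_next_word(("largest", "planet", "is")))
```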
AI can be wrong in multiple ways:
- It can give the wrong answer
- It can omit information by mistake
- It can make up completely fake people, events, and articles
- It can mix truth and fiction
Explore each section below to learn more.
Lateral Reading
If you cannot take AI-cited sources at face value, and you (or even the AI’s developers) cannot determine where the information is sourced from, how are you going to assess the validity of what AI is telling you? Remember SIFT and lateral reading from the beginning of the course? These tools will help you assess the accuracy of what AI gives you. This short video is a nice reminder of how to do this:
[Video: a short refresher on lateral reading]
Fact-Checking AI
Here’s how to fact-check something you got from ChatGPT or a similar tool:
- Break down the information. Take a look at the response and see if you can isolate specific, searchable claims. This is called fractionation.
- Then it’s lateral reading time! Open a new tab and look for supporting pieces of information (the sketch after this list shows one way to turn your claims into searches). Here are some good sources to start with:
  - When searching for specific pieces of information: DuckDuckGo or Wikipedia
  - When seeing if something exists: Google Scholar, WorldCat One Search, or Wikipedia
- Next, think deeper about what assumptions are being made:
  - What did your prompt assume?
  - What did the AI assume?
  - Who would know things about this topic? Would they have a different perspective than what the AI is offering? Where could you check to find out?
- Finally, make a judgment call. What here is true, what is misleading, and what is factually incorrect? Can you re-prompt the AI to try to fix some of these errors? Can you dive deeper into one of the sources you found while fact-checking? Remember, you’re repeating this process for each of the claims the AI made, so go back to your list from the first step and keep going!
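For those who like a concrete checklist, here is a small, hypothetical Python sketch of the first two steps: it takes a handful of already-isolated claims (fractionation) and builds lateral-reading search links for each one. The claims are invented examples; only the DuckDuckGo and Google Scholar URL patterns are real.

```python
# Hypothetical fact-checking helper: turn isolated claims from an AI answer
# into lateral-reading searches. The claims below are made-up examples.
from urllib.parse import quote_plus

claims = [
    "The Great Wall of China is visible from the Moon",
    "The Great Wall of China is over 13,000 miles long",
]

for claim in claims:
    query = quote_plus(claim)  # URL-encode the claim for use as a search query
    print(f"Claim: {claim}")
    print(f"  Search the open web: https://duckduckgo.com/?q={query}")
    print(f"  Check the literature: https://scholar.google.com/scholar?q={query}")
```

Opening each link in a new tab and comparing what independent sources say is the lateral reading step; the judgment call at the end still has to be yours.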
Examples of Fact-Checking AI Responses
Please click on the two video links below to see AI fact-checking in action.
Fact Checking AI text and links
Fact Checking AI Scholarly Articles
Source
“Assessing AI Tools” is adapted with permission from “AI and Information Literacy” by the University of Maryland Libraries and the Teaching and Learning Transformation Center. CC BY-NC 4.0.