ChatGPT declared that an Australian mayor had served time in prison for bribery. Should you believe it?[1]
One consideration that comes with using generative AI platforms such as Copilot is the need to verify all output for accuracy, because these systems can produce what are known as hallucinations. But what are hallucinations, and why do they happen?
In the context of generative AI, hallucinations are outputs that sound plausible but are improbable, inappropriate, or plainly wrong. Hallucinations can be caused by low-quality, incorrect, or biased training data; a lack of context in a prompt; or prompts that include colloquialisms or sarcasm the AI misinterprets.[2] While AI researchers and developers are working to reduce hallucinations by fine-tuning language models and building output safeguards, end users should still examine the output of any generative AI platform.
Some hallucinations are easy to spot, as when ChatGPT was asked, "When was the Golden Gate Bridge transported for the second time across Egypt?" and responded, "The Golden Gate Bridge was transported for the second time across Egypt in October of 2016."[3] Others are harder to catch. For example, ChatGPT created citations to non-existent cases for a New York attorney, who did not check whether the cases were real before submitting the legal brief to a federal judge.[4] These examples may cause some to chuckle, but a hallucination can be dangerous when you consider that AI is being deployed in self-driving cars and health care systems.
What strategies can you employ to identify the hallucinations you might encounter? A combination of critical thinking and fact checking is vital for spotting content that may not be entirely reliable. These strategies are key parts of digital literacy, which encourages critical engagement with information and content found online. Check with your librarians for more information, and take a look at the November SCIT Community meeting, “Information literacy in the age of AI.”
Here are five strategies to keep in mind when interacting with AI:
- Check output against common sense. AI-generated content can include odd or highly improbable scenarios, characters, or events.
- Look for responses that seem out of context or like part of a different conversation than the one you’re conducting.
- Watch for internal inconsistency. Depending on the size of the model’s context window, earlier prompts can be forgotten as a conversation goes on (see the sketch after this list).
- Be especially cautious of specific facts like names, dates, and locations. These should be fact-checked against authoritative sources.
- Consult with experts. While AI may be convenient, consulting with a real human expert can help you identify inaccuracies or inconsistencies typical of AI hallucinations.
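To see why earlier parts of a conversation can be “forgotten,” it helps to picture the model’s fixed context window. The following is a minimal sketch, not how any particular product works: it substitutes a simple word-count budget for a real tokenizer, and the `trim_to_context_window` helper and sample conversation are invented for illustration.

```python
# Toy illustration of a fixed "context window": once a conversation exceeds
# the budget, the oldest messages are dropped and the model can no longer
# "see" them. Real systems count tokens with a tokenizer; a plain word
# count stands in here for illustration only.

def trim_to_context_window(messages, max_words=50):
    """Keep only the most recent messages that fit within max_words."""
    kept, total = [], 0
    for message in reversed(messages):  # walk from newest to oldest
        words = len(message.split())
        if total + words > max_words:
            break  # this message and everything older is dropped
        kept.append(message)
        total += words
    return list(reversed(kept))

conversation = [
    "User: My name is Alex and I live in Iowa City.",
    "AI: Nice to meet you, Alex!",
    "User: " + "Tell me about prairie restoration. " * 10,
    "User: What's my name?",
]

# Only the final message fits the budget; the message containing the
# user's name has fallen out of the window, so the model cannot recall it.
print(trim_to_context_window(conversation))
```

Once the budget is exhausted, the oldest messages simply never reach the model, so it may contradict details it appeared to “know” a few turns earlier.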
It’s important to note that AI doesn’t “know” anything. A language model perceives language only as mathematical relationships between words: the words it produces carry no meaning for the model, only a probability that they are a plausible continuation of the prompt, as the toy example below illustrates. Generative AI is only your assistant; you are the human in charge and responsible for anything created with AI.
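Here is a minimal sketch of that idea using a bigram model, the simplest possible word-prediction scheme. The tiny corpus is invented for illustration, and real models learn far richer statistics from billions of parameters, but the principle is the same: the output is a probability distribution over words, not a statement the model understands.

```python
from collections import Counter, defaultdict

# A tiny invented training corpus. Real models train on vast amounts of
# text, but the idea is the same: learn which words tend to follow which,
# then report probabilities.
corpus = (
    "the bridge spans the bay . "
    "the bridge was built in 1937 . "
    "the bridge was painted orange ."
).split()

# Count, for each word, which words follow it (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Return the model's probability for each word that may come next."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model reports only what tended to follow "bridge" in its training
# text. It has no concept of what a bridge is or whether a sentence is true.
print(next_word_probabilities("bridge"))
# -> {'spans': 0.333..., 'was': 0.666...}
```

Notice that the model will assign high probability to a fluent continuation whether or not it is true; fluency, not accuracy, is what it measures.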
Generative AI tools and features will continue to change as the technology develops. As we continue to explore generative AI, we’re open to learning more about how you are using it. Faculty and staff can email their AI experiences, questions, and suggestions to ai-feedback@uiowa.edu.
Further reading
Edwards, B. (2023, April 6). Unsafe at any seed: Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. Retrieved from https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/
Metz, C. (2023, March 29). What makes A.I. chatbots go wrong? The New York Times. Retrieved from https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html
Mollick, E. (2023, July 15). How to Use AI to Do Stuff: An Opinionated Guide. One Useful Thing. Retrieved from https://www.oneusefulthing.org/p/how-to-use-ai-to-do-stuff-an-opinionated
Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9), 906. https://doi.org/10.3390/educsci13090906
What are AI hallucinations? (n.d.). IBM. Retrieved from https://www.ibm.com/topics/ai-hallucinations
Sources
[1] https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/
[2] https://www.techopedia.com/definition/ai-hallucination
[3] https://www.economist.com/by-invitation/2022/09/02/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter
[4] https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html