Artificial Intelligence (AI): Quality of Information & Hallucinations

What are AI "Hallucinations"?

AI hallucinations occur when a tool confidently presents false or fabricated information as if it were fact. They are common with large language models such as ChatGPT.

Examples: 

  • Inventing citations or sources 

  • Providing incorrect medical facts 

  • Misinterpreting context or user intent 

How to Avoid Hallucinations: 

  • Always verify AI outputs with trusted sources (e.g., PubMed, Cochrane Library, textbooks) 

  • Use AI for support, not substitution — especially in academic and clinical settings 

  • Ask for sources, then cross-check them yourself (a small example of automated cross-checking follows this list) 

  • Don’t use AI for critical medical decisions unless it's part of an approved clinical decision-support system 
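If you want to automate part of the cross-checking step, one simple test is whether a cited DOI actually exists. The Python sketch below is an illustration only, not an endorsed workflow: the `doi_exists` helper and the placeholder DOI are assumptions added here. It queries Crossref's public REST API and prints the registered title so you can compare it with the citation the AI produced. A DOI that resolves does not guarantee the citation is accurate or relevant, only that the work exists.

```python
# Minimal sketch, assuming the `requests` package is installed.
# Checks whether a DOI cited by an AI tool resolves to a real record in
# Crossref's public REST API (https://api.crossref.org).
import requests


def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, printing its title."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False  # unknown DOI, or the service could not be reached
    record = resp.json()["message"]
    titles = record.get("title") or ["<no title on record>"]
    print(titles[0])  # compare this against the title the AI gave you
    return True


if __name__ == "__main__":
    # "10.1000/xyz123" is a placeholder, not a real citation; expect False.
    print(doi_exists("10.1000/xyz123"))
```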

Evaluating AI-Generated Content

When reviewing AI-generated content: 

  • Check accuracy against peer-reviewed literature 

  • Assess bias or ethical concerns 

  • Review readability and coherence 

  • Look for source citations (real and verifiable) 

Use the CRAAP test:

  • Currency: Is the information up to date?
  • Relevance: Does the information relate to your research topic?
  • Authority: Who is the author or publisher, and what are their credentials?
  • Accuracy: Is the information reliable and supported by evidence?
  • Purpose: What is the intent behind the information? Is it to inform, persuade, entertain, or sell?

When Is It Safe to Use Generative AI?

Here is a diagram to help you answer the question!

Image adapted from “Is it safe to use ChatGPT for your task?” by Aleksandr Tiulkanov. Used under a Creative Commons CC BY license.