AI hallucinations occur when a tool confidently presents false or fabricated information as fact. They are common with large language models such as ChatGPT.
Examples:
Inventing citations or sources
Presenting incorrect medical information as fact
Misinterpreting context or user intent
How to Avoid Hallucinations:
Always verify AI outputs with trusted sources (e.g., PubMed, Cochrane Library, textbooks)
Use AI for support, not substitution — especially in academic and clinical settings
Ask for sources, then cross-check them yourself (one way to spot-check a cited DOI is sketched after this list)
Don’t use AI for critical medical decisions unless it's part of an approved clinical decision-support system
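A quick automated spot-check can catch an invented citation before you spend time searching for it. The sketch below is a hypothetical illustration, not part of any approved clinical workflow: it uses the public Crossref REST API to check whether a DOI supplied by an AI tool resolves to a real record. A DOI that resolves only proves the record exists; you still need to read the source and confirm it supports the claim.

```python
# A rough sketch, not part of the original guidance: spot-check whether a DOI
# cited by an AI tool exists in the Crossref index. The DOI below is a
# placeholder; substitute the one the tool gave you.
import requests


def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        # A network failure means "could not verify", not "fake".
        return False
    return response.status_code == 200


if __name__ == "__main__":
    candidate = "10.1000/xyz123"  # placeholder DOI from an AI-generated citation
    if doi_exists(candidate):
        print("DOI found in Crossref; still read the paper to confirm it supports the claim.")
    else:
        print("DOI not found; the citation may be hallucinated.")
```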
When reviewing AI-generated content:
Check accuracy against peer-reviewed literature
Assess bias or ethical concerns
Review readability and coherence
Look for source citations (real and verifiable)
Use the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose):