Hallucination
Module: fundamentals
What it is
Hallucination occurs when an AI model generates information that is factually incorrect, fabricated, or nonsensical while presenting it confidently as fact. The model isn't lying; it is generating plausible-sounding text based on patterns in its training data, without actually knowing whether the content is true.
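To make the mechanism concrete, here is a minimal sketch of pattern-based next-token generation. The prompt and the probabilities are invented for illustration only; the point is that the model samples whichever continuation looks plausible, and nothing in the process checks whether the resulting claim is true.

```python
import random

# Toy next-token distribution for the prompt:
# "The paper that introduced transformers was published in"
# These probabilities are made up for illustration.
next_token_probs = {
    "2017": 0.55,  # correct
    "2016": 0.20,  # plausible but wrong
    "2018": 0.15,  # plausible but wrong
    "1997": 0.10,  # wrong
}

def sample_next_token(probs):
    """Pick one token in proportion to its probability.

    There is no fact-checking step anywhere in this loop: the model
    only knows which continuations look likely, not which are true.
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print("The paper that introduced transformers was published in",
      sample_next_token(next_token_probs))
```

Run it a few times and it will sometimes print a wrong year with exactly the same confidence as the right one, which is hallucination in miniature.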
Why it matters
Hallucination is a critical limitation to understand. AI can confidently cite non-existent sources, invent statistics, or misstate facts. Always verify important information from AI, especially specific claims, citations, or technical details. Hallucination isn't a bug that will be completely fixed—it's inherent to how language models work.