Hallucination
The generation of outputs by an AI model that are plausible-sounding but factually incorrect, fabricated or unsupported by the model's input or training data.
In Plain Language
When AI confidently makes something up. It might state a fake statistic, cite a non-existent research paper or invent a historical event, all while sounding completely sure of itself.
Why This Matters
Hallucination is one of the most significant risks in deploying generative AI. Your governance framework must include controls for detecting, mitigating and disclosing hallucination risks, particularly where AI outputs inform business decisions or customer interactions.
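One simple detection control is a grounding check: compare an AI output against the source material the model was given and flag statements that nothing in the source supports. The sketch below is illustrative only; the word-overlap heuristic, the 0.5 threshold and the function names are assumptions made for this example, not a production-grade hallucination detector.

```python
# Minimal sketch of a naive grounding check: flag output sentences whose
# content words have little overlap with the source material the model saw.
# The tokenisation and threshold are illustrative assumptions only.
import re

def content_words(text: str) -> set[str]:
    """Lowercase word tokens of length 4+ as a rough proxy for content words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) >= 4}

def flag_unsupported(source: str, output: str, threshold: float = 0.5) -> list[str]:
    """Return output sentences with low lexical overlap with the source."""
    source_vocab = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = content_words(sentence)
        if not words:
            continue
        support = len(words & source_vocab) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    source = "The 2023 annual report shows revenue of 4.2 million dollars."
    output = ("Revenue reached 4.2 million dollars in 2023. "
              "The company also won the National Innovation Award.")
    for s in flag_unsupported(source, output):
        print("Possible hallucination:", s)
```

Heuristics like this only catch the crudest cases; in practice organisations layer stronger controls on top, such as retrieval-augmented generation, required citations and human review of high-stakes outputs.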
