Sep 6 (edited) • 💬 General
Why Language Models Hallucinate - OpenAI
"Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.
Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because training and eval procedures reward guessing, acknowledging uncertainty and we analyze the statistical causes of hallucinations in the modern training pipeline.
Hallucinations need not be mysterious - they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded - language models are optimized to be good test-takers, and guessing when uncertain improves test performance.
This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field trustworthy AI systems."