This paper addresses the challenges posed by hallucination in
large language models (LLMs), particularly in the medical
domain. Hallucination, in which these models generate plausible yet unverified
or incorrect information, can have serious consequences in healthcare
applications.
applications. We propose a new benchmark and dataset, Med-HALT (Medical Domain
Hallucination Test), designed specifically to evaluate and reduce
hallucinations. Med-HALT provides a diverse multinational dataset derived from
medical examinations across various countries and includes multiple innovative
testing modalities. Med-HALT includes two categories of tests reasoning and
memory-based hallucination tests, designed to assess LLMs's problem-solving and
information retrieval abilities.
Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa-2,
MPT, and Falcon, revealing significant differences in their performance. The
paper provides detailed insights into the dataset, promoting transparency and
reproducibility. Through this work, we aim to contribute to the development of
safer and more reliable language models in healthcare. Our benchmark can be
found at medhalt.github.io.