Trapping LLM Hallucinations Using Tagged Context Prompts

dc.contributor.author: Feldman, Philip
dc.contributor.author: Foulds, James
dc.contributor.author: Pan, Shimei
dc.date.accessioned: 2024-02-27T17:45:27Z
dc.date.available: 2024-02-27T17:45:27Z
dc.date.issued: 2023-06-09
dc.description.abstract: Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversational agents. However, these models suffer from "hallucinations," in which the model generates false or fabricated information. Addressing this challenge is crucial, particularly as AI-driven platforms are adopted across various sectors. In this paper, we propose a novel method to recognize and flag instances when LLMs perform outside their domain knowledge, ensuring users receive accurate information. We find that the use of context combined with embedded tags can successfully combat hallucinations within generative language models. To do this, we baseline hallucination frequency in no-context prompt-response pairs, using generated URLs as easily tested indicators of fabricated data. We observed a significant reduction in overall hallucination when context was supplied along with question prompts for the tested generative engines. Lastly, we evaluated how placing tags within contexts affected model responses and were able to eliminate hallucinations in responses with 98.88% effectiveness.
dc.description.uri: https://arxiv.org/abs/2306.06085
dc.format.extent: 14 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m20k3c-smqk
dc.identifier.uri: https://doi.org/10.48550/arXiv.2306.06085
dc.identifier.uri: http://hdl.handle.net/11603/31712
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Trapping LLM Hallucinations Using Tagged Context Prompts
dc.type: Text
dcterms.creator: https://orcid.org/0000-0001-6164-6620
dcterms.creator: https://orcid.org/0000-0003-0935-4182
dcterms.creator: https://orcid.org/0000-0002-5989-8543
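As a rough illustration of the tagged-context idea described in the abstract, the sketch below appends a unique tag to each context sentence and then checks which tags a model's response echoes; a response citing no tags would be a candidate hallucination. The tag format, function names, and detection logic here are illustrative assumptions, not the paper's exact scheme.

```python
import re

def tag_context(sentences):
    """Append a unique tag to each context sentence so a model's
    response can be traced back to specific source statements.
    (Tag format is an illustrative assumption.)"""
    return " ".join(f"{s} <TAG-{i}>" for i, s in enumerate(sentences, start=1))

def cited_tags(response):
    """Return the set of tag numbers the model echoed in its response.
    An empty set flags a response that cites no supplied context."""
    return {int(m) for m in re.findall(r"<TAG-(\d+)>", response)}

# Hypothetical usage: build a tagged context, then inspect a response.
context = tag_context([
    "LLMs can generate fabricated URLs.",
    "Supplying context reduces hallucination frequency.",
])
response = "Adding context lowers hallucination rates <TAG-2>."
grounded = cited_tags(response)   # {2}: the response cites sentence 2
suspect = cited_tags("Hallucinated answer with no tags.")  # empty set
```

In this sketch, the presence or absence of echoed tags serves as the "trap": untagged answers can be flagged for review rather than passed to the user as grounded.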

Files

Original bundle

Name: 2306.06085.pdf
Size: 341.13 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon at submission