SymLoc: Symbolic Localization of Hallucination across HaluEval and TruthfulQA

dc.contributor.author: Lamba, Naveen
dc.contributor.author: Tiwari, Sanju
dc.contributor.author: Gaur, Manas
dc.date.accessioned: 2026-01-06T20:51:53Z
dc.date.issued: 2025-11-18
dc.description.abstract: LLMs still struggle with hallucination, especially when confronted with symbolic triggers like modifiers, negation, numbers, exceptions, and named entities. Yet, we lack a clear understanding of where these symbolic hallucinations originate, making it crucial to systematically handle such triggers and localize the emergence of hallucination inside the model. While prior work explored localization using statistical techniques like LSC and activation variance analysis, these methods treat all tokens equally and overlook the role symbolic linguistic knowledge plays in triggering hallucinations. So far, no approach has investigated how symbolic elements specifically drive hallucination failures across model layers, nor has symbolic linguistic knowledge been used as the foundation for a localization framework. We propose the first symbolic localization framework that leverages symbolic linguistic and semantic knowledge to meaningfully trace the development of hallucinations across all model layers. By focusing on how models process symbolic triggers, we analyze five models using HaluEval and TruthfulQA. Our symbolic knowledge approach reveals that attention variance for these linguistic elements explodes to critical instability in early layers (2-4), with negation triggering catastrophic variance levels, demonstrating that symbolic semantic processing breaks down from the very beginning. Through the lens of symbolic linguistic knowledge, despite larger model sizes, hallucination rates remain consistently high (78.3%-83.7% across Gemma variants), with steep attention drops for symbolic semantic triggers throughout deeper layers. Our findings demonstrate that hallucination is fundamentally a symbolic linguistic processing failure, not a general generation problem, revealing that symbolic semantic knowledge provides the key to understanding and localizing hallucination mechanisms in LLMs.
dc.description.uri: http://arxiv.org/abs/2511.14172
dc.format.extent: 10 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2iuby-xdzr
dc.identifier.uri: https://doi.org/10.48550/arXiv.2511.14172
dc.identifier.uri: http://hdl.handle.net/11603/41385
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: UMBC Ebiquity Research Group
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Computer Science - Computation and Language
dc.title: SymLoc: Symbolic Localization of Hallucination across HaluEval and TruthfulQA
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name: 2511.14172v1.pdf
Size: 2.79 MB
Format: Adobe Portable Document Format