Exploring connections between auditory hallucinations and language model structures and functions
Date
2024-06-06
Citation of Original Publication
Allen, Janerra D., Luke Xia, L. Elliot Hong, and Fow-Sen Choa. “Exploring Connections between Auditory Hallucinations and Language Model Structures and Functions.” In Smart Biomedical and Physiological Sensor Technology XXI, 13059:58–68. SPIE, 2024. https://doi.org/10.1117/12.3013964.
Rights
©2024 Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
Abstract
Auditory hallucinations are a hallmark symptom of mental disorders such as schizophrenia, psychosis, and bipolar disorder. The biological basis for auditory perception and hallucinations, however, is not well understood. Understanding hallucinations may broadly illuminate how our brains work — namely, by making predictions about stimuli and the environments that we navigate. In this work, we use recently developed language models to aid the understanding of auditory hallucinations. Bio-inspired Large Language Models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT) generate the next word from previously generated words via their embedding space and pre-trained weights, with or without external input. The generative mechanisms of the neural networks in GPT, such as self-attention, can be analogously associated with the neurophysiological sources of hallucinations. Functional imaging studies have revealed that hyperactivity of the auditory cortex and disruption between auditory and verbal network activity may underlie the etiology of auditory hallucinations. Imaging of key areas involved in auditory processing suggests that regions supporting verbal working memory and language processing are also associated with hallucinations; specifically, auditory hallucinations reflect decreased activity in verbal working memory and language processing regions, including the superior temporal and inferior parietal regions. Parallels between auditory processing and the LLM transformer architecture may help to decode brain functions underlying meaning assignment, contextual embedding, and hallucination mechanisms. Furthermore, an improved understanding of neurophysiological functions and brain architecture would bring us one step closer to creating human-like intelligence.
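The self-attention mechanism the abstract refers to can be illustrated with a minimal sketch. This is not code from the paper — it is a generic scaled dot-product self-attention implementation in NumPy, with illustrative shapes (sequence length, embedding size) chosen arbitrarily; it shows how each token's representation is re-weighted by its contextual affinity to every other token, the "contextual embedding" step the abstract draws parallels to.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of embeddings.

    x:             (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices

    Returns the context-mixed representations and the attention weights.
    """
    q = x @ w_q                      # queries: what each token looks for
    k = x @ w_k                      # keys: what each token offers
    v = x @ w_v                      # values: the content that gets mixed
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise token affinities
    # softmax over each row so weights for every token sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Toy example with random embeddings and projections.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.standard_normal((seq_len, d_model))
out, attn = self_attention(x,
                           rng.standard_normal((d_model, d_k)),
                           rng.standard_normal((d_model, d_k)),
                           rng.standard_normal((d_model, d_k)))
print(out.shape)   # one context-mixed vector per input token
```

Each output row is a weighted blend of all value vectors, so a token's representation depends on the whole context — the property the abstract associates with how predictions (and, by analogy, hallucinated percepts) can arise without a matching external stimulus.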