SaGE: Evaluating Moral Consistency in Large Language Models

dc.contributor.author: Bonagiri, Vamshi Krishna
dc.contributor.author: Vennam, Sreeram
dc.contributor.author: Govil, Priyanshul
dc.contributor.author: Kumaraguru, Ponnurangam
dc.contributor.author: Gaur, Manas
dc.date.accessioned: 2024-03-06T18:52:21Z
dc.date.available: 2024-03-06T18:52:21Z
dc.date.issued: 2024-02-21
dc.description.abstract: Despite recent advancements showcasing the impressive capabilities of Large Language Models (LLMs) in conversational systems, we show that even state-of-the-art LLMs are morally inconsistent in their generations, questioning their reliability (and trustworthiness in general). Prior works in LLM evaluation focus on developing ground-truth data to measure accuracy on specific tasks. However, for moral scenarios that often lack universally agreed-upon answers, consistency in model responses becomes crucial for their reliability. To address this issue, we propose an information-theoretic measure called Semantic Graph Entropy (SaGE), grounded in the concept of "Rules of Thumb" (RoTs), to measure a model's moral consistency. RoTs are abstract principles learned by a model and can help explain their decision-making strategies effectively. To this end, we construct the Moral Consistency Corpus (MCC), containing 50K moral questions, responses to them by LLMs, and the RoTs that these models followed. Furthermore, to illustrate the generalizability of SaGE, we use it to investigate LLM consistency on two popular datasets -- TruthfulQA and HellaSwag. Our results reveal that task accuracy and consistency are independent problems, and there is a dire need to investigate these issues further.
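The abstract frames consistency as an information-theoretic quantity over a model's responses. As a minimal illustration of that idea (not the paper's actual SaGE implementation, which builds semantic graphs over RoTs), one can measure inconsistency as the Shannon entropy of the distribution of normalized responses to paraphrases of the same moral question; the function name and normalization below are illustrative assumptions:

```python
import math
from collections import Counter

def response_entropy(responses):
    """Shannon entropy (bits) over the distribution of normalized responses.

    0.0 means the model answered identically every time (perfectly
    consistent); higher values mean the answers were spread across more
    distinct responses (less consistent). This is a toy proxy: SaGE itself
    uses semantic graphs over Rules of Thumb, not exact string matching.
    """
    # Crude normalization stands in for semantic equivalence here.
    normalized = [r.strip().lower() for r in responses]
    counts = Counter(normalized)
    n = len(normalized)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, three identical answers yield entropy 0.0, while three distinct answers to paraphrases of one question yield log2(3) ≈ 1.585 bits, the maximum for three samples.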
dc.description.uri: http://arxiv.org/abs/2402.13709
dc.format.extent: 12 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2cxkm-sopd
dc.identifier.uri: https://doi.org/10.48550/arXiv.2402.13709
dc.identifier.uri: http://hdl.handle.net/11603/31847
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Creative Commons Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Computer Science - Computation and Language
dc.title: SaGE: Evaluating Moral Consistency in Large Language Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name: 2402.13709.pdf
Size: 2.83 MB
Format: Adobe Portable Document Format