Evaluating Consistency and Reasoning Capabilities of Large Language Models

dc.contributor.author: Saxena, Yash
dc.contributor.author: Chopra, Sarthak
dc.contributor.author: Tripathi, Arunendra Mani
dc.date.accessioned: 2026-02-12T16:44:18Z
dc.date.issued: 2024-07-18
dc.description: 2024 Second International Conference on Data Science and Information System (ICDSIS), Hassan, India, May 17, 2024
dc.description.abstract: Large Language Models (LLMs) are extensively used today across various sectors, including academia, research, business, and finance, for tasks such as text generation, summarization, and translation. Despite their widespread adoption, these models often produce incorrect and misleading information, exhibiting a tendency to hallucinate. This behavior can be attributed to several factors, with consistency and reasoning capabilities being significant contributors. LLMs frequently lack the ability to generate explanations and engage in coherent reasoning, leading to inaccurate responses. Moreover, they exhibit inconsistencies in their outputs. This paper aims to evaluate and compare the consistency and reasoning capabilities of both public and proprietary LLMs. The experiments utilize the BoolQ dataset as the ground truth, comprising questions, answers, and corresponding explanations. Queries from the dataset are presented as prompts to the LLMs, and the generated responses are evaluated against the ground truth answers. Additionally, explanations are generated to assess the models’ reasoning abilities. Consistency is evaluated by repeatedly presenting the same query to the models and observing variations in their responses. To measure reasoning capability, the generated explanations are compared to the ground truth explanations using metrics such as BERTScore, BLEU, and F-1 scores. The findings reveal that proprietary models generally outperform public models in terms of both consistency and reasoning capabilities. However, even when presented with basic general-knowledge questions, none of the models achieved a score of 90% in both consistency and reasoning. This study underscores the direct correlation between consistency and reasoning abilities in LLMs and highlights the reasoning challenges inherent in current language models.
dc.description.uri: https://ieeexplore.ieee.org/document/10594233
dc.format.extent: 5 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2x8n9-otva
dc.identifier.citation: Saxena, Yash, Sarthak Chopra, and Arunendra Mani Tripathi. “Evaluating Consistency and Reasoning Capabilities of Large Language Models.” 2024 Second International Conference on Data Science and Information System (ICDSIS), July 18, 2024. https://doi.org/10.1109/ICDSIS61070.2024.10594233.
dc.identifier.uri: https://doi.org/10.1109/ICDSIS61070.2024.10594233
dc.identifier.uri: http://hdl.handle.net/11603/41879
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: UMBC KAI2 Knowledge-infused AI and Inference lab
dc.title: Evaluating Consistency and Reasoning Capabilities of Large Language Models
dc.type: Text
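
Note on the method: the abstract outlines two measurements, consistency, taken by repeating the same BoolQ question and checking whether the answer changes, and reasoning, taken by scoring a generated explanation against the ground-truth explanation with BERTScore, BLEU, and F-1. The following is a minimal sketch of that loop, not the authors' released code; query_llm is a hypothetical stand-in for each model's API client, and lowercasing the answers is a simplifying assumption about answer normalization.

    from collections import Counter
    from typing import Callable

    from bert_score import score as bertscore            # pip install bert-score
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    def consistency(query_llm: Callable[[str], str], question: str, trials: int = 5) -> float:
        # Ask the same yes/no question several times; the score is the share
        # of answers agreeing with the majority answer (1.0 = fully consistent).
        answers = [query_llm(question).strip().lower() for _ in range(trials)]
        _, majority_count = Counter(answers).most_common(1)[0]
        return majority_count / trials

    def reasoning_scores(generated: str, reference: str) -> dict:
        # BERTScore returns (precision, recall, F1) tensors; keep the F1.
        _, _, f1 = bertscore([generated], [reference], lang="en")
        # Sentence-level BLEU with smoothing, since explanations are short.
        bleu = sentence_bleu([reference.split()], generated.split(),
                             smoothing_function=SmoothingFunction().method1)
        return {"bertscore_f1": float(f1[0]), "bleu": bleu}

Answer accuracy and the F-1 over the yes/no labels can then be computed with standard tooling such as sklearn.metrics.f1_score once responses are normalized to booleans; the 90% figure in the abstract refers to a model clearing that bar on both the consistency and the reasoning scores at once.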

Files

Original bundle

Name: 2404.16478v1.pdf
Size: 363.63 KB
Format: Adobe Portable Document Format