REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs

dc.contributor.author: Tilwani, Deepa
dc.contributor.author: Saxena, Yash
dc.contributor.author: Mohammadi, Ali
dc.contributor.author: Raff, Edward
dc.contributor.author: Sheth, Amit
dc.contributor.author: Parthasarathy, Srinivasan
dc.contributor.author: Gaur, Manas
dc.date.accessioned: 2024-05-29T14:38:17Z
dc.date.available: 2024-05-29T14:38:17Z
dc.date.issued: 2024-05-09
dc.description.abstract: Automatic citation generation for sentences in a document or report is paramount for intelligence analysts, cybersecurity, news agencies, and education personnel. In this research, we investigate whether large language models (LLMs) are capable of generating references based on two forms of sentence queries: (a) Direct Queries, in which an LLM is asked to provide the author names of a given research article, and (b) Indirect Queries, in which an LLM is asked to provide the title of an article mentioned in a sentence taken from a different article. To demonstrate where LLMs stand on this task, we introduce REASONS, a large dataset comprising abstracts from the 12 most popular domains of scientific research on arXiv. From around 20K research articles, we draw the following conclusions about public and proprietary LLMs: (a) state-of-the-art models such as GPT-4 and GPT-3.5, often called anthropomorphic, suffer from a high pass percentage (PP), which they incur to minimize the hallucination rate (HR), and when tested against Perplexity.ai (7B) they unexpectedly made more errors; (b) augmenting queries with relevant metadata lowered the PP and yielded the lowest HR; (c) advanced retrieval-augmented generation (RAG) using Mistral demonstrated consistent and robust citation support on indirect queries and matched the performance of GPT-3.5 and GPT-4; the HR across all domains and models decreased by an average of 41.93%, and the PP was reduced to 0% in most cases, while in terms of generation quality the average F1 score and BLEU were 68.09% and 57.51%, respectively; (d) testing with adversarial samples showed that LLMs, including the advanced-RAG Mistral, struggle to understand context, though the extent of this issue was small for Mistral and GPT-4-Preview. Our study contributes valuable insights into the reliability of RAG for automated citation generation tasks. (A minimal illustrative sketch of the two query forms and metrics follows the metadata record below.)
dc.description.uri: http://arxiv.org/abs/2405.02228
dc.format.extent: 24 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2pygh-5qbw
dc.identifier.uri: https://doi.org/10.48550/arXiv.2405.02228
dc.identifier.uri: http://hdl.handle.net/11603/34329
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: CC BY-NC-ND 4.0 DEED Attribution-NonCommercial-NoDerivs 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Computer Science - Computation and Language
dc.subject: Computer Science - Information Retrieval
dc.title: REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs
dc.type: Text
dcterms.creator: https://orcid.org/0009-0000-8632-0491
dcterms.creator: https://orcid.org/0000-0002-9900-1972
dcterms.creator: https://orcid.org/0000-0002-5411-2230
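
Illustrative sketch: the Python snippet below shows minimal versions of the two query forms (Direct and Indirect) and of the PP and HR metrics referenced in the abstract. The prompt templates, helper names, and metric definitions are assumptions made for exposition, not the paper's exact prompts, code, or formulas.

# Illustrative sketch of the REASONS query forms and metrics. All prompt
# templates, helper names, and metric definitions below are assumptions
# made for exposition; they are not the paper's exact implementation.

def direct_query(title: str) -> str:
    # Direct Query: ask the LLM for the author names of a given article.
    return f"Provide the author names of the research article titled '{title}'."

def indirect_query(sentence: str) -> str:
    # Indirect Query: given a sentence that mentions another article,
    # ask the LLM for the title of that article.
    return ("The following sentence mentions another research article. "
            f"Provide the title of that article.\nSentence: {sentence}")

def pass_percentage(responses: list) -> float:
    # Assumed definition of PP: the share of queries the model declines
    # to answer (a declined query is represented here as None).
    return 100.0 * sum(r is None for r in responses) / len(responses)

def hallucination_rate(responses: list, gold: list) -> float:
    # Assumed definition of HR: the share of answered queries whose
    # citation does not match the ground truth.
    answered = [(r, g) for r, g in zip(responses, gold) if r is not None]
    if not answered:
        return 0.0
    wrong = sum(r.strip().lower() != g.strip().lower() for r, g in answered)
    return 100.0 * wrong / len(answered)

if __name__ == "__main__":
    print(direct_query("Attention Is All You Need"))
    print(indirect_query("Self-attention replaced recurrence in sequence models [1]."))
    print(pass_percentage([None, "A. Vaswani et al.", "J. Devlin et al."]))  # ~33.3
    print(hallucination_rate(["attention is all you need", None],
                             ["Attention Is All You Need", "BERT"]))         # 0.0

A refusal ("pass") and a wrong answer trade off against each other here, which is why the abstract reports PP and HR together: a model can drive its HR down simply by declining more queries.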

Files

Original bundle

Name: 2405.02228v2.pdf
Size: 4.16 MB
Format: Adobe Portable Document Format