LLM-ProS: Analyzing Large Language Models' Performance in Competitive Problem Solving

dc.contributor.authorHossain, Md Sifat
dc.contributor.authorTabassum, Anika
dc.contributor.authorArefin, Md Fahim
dc.contributor.authorZaman, Tarannum Shaila
dc.date.accessioned2025-04-01T14:55:51Z
dc.date.available2025-04-01T14:55:51Z
dc.date.issued2025-02-04
dc.descriptionLLM4Code 2025: The Second International Workshop on Large Language Models for Code
dc.description.abstractThe rapid advancement of large language models has opened new avenues for automating complex problem-solving tasks such as algorithmic coding and competitive programming. This paper introduces a novel evaluation technique, LLM-ProS, to assess the performance of state-of-the-art LLMs on International Collegiate Programming Contest (ICPC) problems. Using a curated dataset of 166 World Finals problems from 2011 to 2024, we benchmark the models' reasoning, accuracy, and efficiency. We evaluate five models: GPT-4o, Mistral Large, Llama-3.1-405B, and the o1 family, consisting of o1-mini and o1-preview, across critical metrics such as correctness, resource utilization, and response calibration. Our results reveal significant differences in the models' abilities to generalize, adapt, and solve novel problems. We also investigate the impact of training methodologies, dataset contamination, and chain-of-thought reasoning on model performance. The findings provide new insights into optimizing LLMs for algorithmic tasks, highlighting both strengths and limitations of current models.
dc.description.urihttp://arxiv.org/abs/2502.04355
dc.format.extent8 pages
dc.genreconference papers and proceedings
dc.genrepreprints
dc.identifierdoi:10.13016/m2mgeg-5ntq
dc.identifier.urihttps://doi.org/10.48550/arXiv.2502.04355
dc.identifier.urihttp://hdl.handle.net/11603/37936
dc.language.isoen_US
dc.relation.isAvailableAtThe University of Maryland, Baltimore County (UMBC)
dc.relation.ispartofUMBC Information Systems Department
dc.relation.ispartofUMBC Faculty Collection
dc.rights© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subjectICPC
dc.subjectChain-of-Thought Reasoning
dc.subjectArtificial Intelligence
dc.subjectCompetitive Programming
dc.subjectComputer Science
dc.subjectComputation and Language
dc.subjectLarge Language Models
dc.subjectPerformance Evaluation
dc.titleLLM-ProS: Analyzing Large Language Models' Performance in Competitive Problem Solving
dc.typeText

Files

Original bundle

Name: LLM_ProS.pdf
Size: 786.67 KB
Format: Adobe Portable Document Format