Visual Reasoning and Multi-Agent Approach in Multimodal Large Language Models (MLLMs): Solving TSP and mTSP Combinatorial Challenges

dc.contributor.author: Elhenawy, Mohammed
dc.contributor.author: Abutahoun, Ahmad
dc.contributor.author: Alhadidi, Taqwa I.
dc.contributor.author: Jaber, Ahmed
dc.contributor.author: Ashqar, Huthaifa
dc.contributor.author: Jaradat, Shadi
dc.contributor.author: Abdelhay, Ahmed
dc.contributor.author: Glaser, Sebastien
dc.contributor.author: Rakotonirainy, Andry
dc.date.accessioned: 2024-10-28T14:31:04Z
dc.date.available: 2024-10-28T14:31:04Z
dc.date.issued: 2024-06-26
dc.description.abstract: Multimodal Large Language Models (MLLMs) harness comprehensive knowledge spanning text, images, and audio to adeptly tackle complex problems, including zero-shot in-context learning scenarios. This study explores the ability of MLLMs to visually solve the Traveling Salesman Problem (TSP) and Multiple Traveling Salesman Problem (mTSP) using images that portray point distributions on a two-dimensional plane. We introduce a novel approach employing multiple specialized agents within the MLLM framework, each dedicated to optimizing solutions for these combinatorial challenges. Our experimental investigation includes rigorous evaluations across zero-shot settings and introduces innovative multi-agent zero-shot in-context scenarios. The results demonstrated that both multi-agent models, Multi-Agent 1 (which includes the Initializer, Critic, and Scorer agents) and Multi-Agent 2 (which comprises only the Initializer and Critic agents), significantly improved solution quality for TSP and mTSP problems. Multi-Agent 1 excelled in environments requiring detailed route refinement and evaluation, providing a robust framework for sophisticated optimizations. In contrast, Multi-Agent 2, focusing on iterative refinements by the Initializer and Critic, proved effective for rapid decision-making scenarios. These experiments yield promising outcomes, showcasing the robust visual reasoning capabilities of MLLMs in addressing diverse combinatorial problems. The findings underscore the potential of MLLMs as powerful tools in computational optimization, offering insights that could inspire further advancements in this promising field. Project link: https://github.com/ahmed-abdulhuy/Solving-TSP-and-mTSP-Combinatorial-Challenges-using-Visual-Reasoning-and-Multi-Agent-Approach-MLLMs-.git
dc.description.uri: https://arxiv.org/abs/2407.00092v1
dc.format.extent: 28 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m29djw-jgkx
dc.identifier.uri: https://doi.org/10.48550/arXiv.2407.00092
dc.identifier.uri: http://hdl.handle.net/11603/36795
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Data Science
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Visual Reasoning and Multi-Agent Approach in Multimodal Large Language Models (MLLMs): Solving TSP and mTSP Combinatorial Challenges
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6835-8338

Files

Original bundle

Name: make-06-00093-v2.pdf
Size: 5.35 MB
Format: Adobe Portable Document Format
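The abstract's Multi-Agent 2 pipeline (an Initializer proposes a tour, a Critic iteratively refines it) can be illustrated with a minimal sketch. This is not the paper's method: the actual agents are MLLM prompts reasoning over images of point distributions, whereas here classical heuristics stand in for them (nearest-neighbor as a hypothetical Initializer, a single 2-opt pass as a hypothetical Critic), purely to show the propose-then-refine loop structure.

```python
import math
import random

# Sketch of an Initializer/Critic refinement loop for TSP.
# Stand-ins only: the paper's agents are MLLM prompts, not these heuristics.

def tour_length(points, tour):
    """Total Euclidean length of the closed tour over the given points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def initializer(points):
    """Initializer stand-in: greedy nearest-neighbor tour from point 0."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def critic(points, tour):
    """Critic stand-in: one pass of 2-opt edge-reversal refinement."""
    best = tour[:]
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            candidate = best[:i] + best[i:j][::-1] + best[j:]
            if tour_length(points, candidate) < tour_length(points, best):
                best = candidate
    return best

def multi_agent_2(points, rounds=3):
    """Initializer proposes once; Critic refines until no improvement."""
    tour = initializer(points)
    for _ in range(rounds):
        refined = critic(points, tour)
        if tour_length(points, refined) >= tour_length(points, tour):
            break  # Critic found no improving swap; stop early
        tour = refined
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
initial = initializer(pts)
refined = multi_agent_2(pts)
# The refinement loop only ever accepts strictly shorter tours:
print(tour_length(pts, refined) <= tour_length(pts, initial))  # → True
```

Multi-Agent 1 would add a third Scorer step that rates each refined tour before accepting it; the loop above omits it, matching the lighter two-agent variant described in the abstract.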