VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning

dc.contributor.author: Yilmaz, Nilay
dc.contributor.author: Patel, Maitreya
dc.contributor.author: Luo, Yiran Lawrence
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Baral, Chitta
dc.contributor.author: Jayasuriya, Suren
dc.contributor.author: Yang, Yezhou
dc.date.accessioned: 2026-02-03T18:14:45Z
dc.date.issued: 2025-03-04
dc.description: Thirteenth International Conference on Learning Representations, April 24–28, 2025, Singapore
dc.description.abstract: Multimodal Large Language Models (MLLMs) have become a powerful tool for integrating visual and textual information. Despite their exceptional performance on visual understanding benchmarks, measuring their ability to reason abstractly across multiple images remains a significant challenge. To address this, we introduce VOILA, a large-scale, open-ended, dynamic benchmark designed to evaluate MLLMs' perceptual understanding and abstract relational reasoning. VOILA employs an analogical mapping approach in the visual domain, requiring models to generate an image that completes an analogy between two given image pairs (reference and application) without relying on predefined choices. Our experiments demonstrate that the analogical reasoning tasks in VOILA present a challenge to MLLMs. Through multi-step analysis, we reveal that current MLLMs struggle to comprehend inter-image relationships and exhibit limited capabilities in high-level relational reasoning. Notably, we observe that performance improves when following a multi-step strategy of least-to-most prompting. Comprehensive evaluations on open-source models and GPT-4o show that on text-based answers, the best accuracy for challenging scenarios is 13% (LLaMa 3.2) and even for simpler tasks is only 29% (GPT-4o), while human performance is significantly higher at 70% across both difficulty levels.
dc.description.sponsorship: NY is supported by the Republic of Turkiye Ministry of National Education. MP, CB, and YY are supported by US NSF RI grant #2132724. TG was supported by the SURFF award from UMBC ORCA. We thank the NSF NAIRR initiative, the Research Computing (RC) at Arizona State University (ASU), and cr8dl.ai for their generous support in providing computing resources. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
dc.description.uri: http://arxiv.org/abs/2503.00043
dc.format.extent: 25 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2okcm-enuk
dc.identifier.uri: https://doi.org/10.48550/arXiv.2503.00043
dc.identifier.uri: http://hdl.handle.net/11603/41659
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Computer Science - Computation and Language
dc.subject: Computer Science - Computer Vision and Pattern Recognition
dc.subject: Computer Science - Artificial Intelligence
dc.title: VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804

Files

Original bundle

Name: 2503.00043v2.pdf
Size: 35.76 MB
Format: Adobe Portable Document Format