VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning
| dc.contributor.author | Yilmaz, Nilay | |
| dc.contributor.author | Patel, Maitreya | |
| dc.contributor.author | Luo, Yiran Lawrence | |
| dc.contributor.author | Gokhale, Tejas | |
| dc.contributor.author | Baral, Chitta | |
| dc.contributor.author | Jayasuriya, Suren | |
| dc.contributor.author | Yang, Yezhou | |
| dc.date.accessioned | 2026-02-03T18:14:45Z | |
| dc.date.issued | 2025-03-04 | |
| dc.description | Thirteenth International Conference on Learning Representations, April 24–28, 2025, Singapore | |
| dc.description.abstract | Multimodal Large Language Models (MLLMs) have become a powerful tool for integrating visual and textual information. Despite their exceptional performance on visual understanding benchmarks, measuring their ability to reason abstractly across multiple images remains a significant challenge. To address this, we introduce VOILA, a large-scale, open-ended, dynamic benchmark designed to evaluate MLLMs' perceptual understanding and abstract relational reasoning. VOILA employs an analogical mapping approach in the visual domain, requiring models to generate an image that completes an analogy between two given image pairs, reference and application, without relying on predefined choices. Our experiments demonstrate that the analogical reasoning tasks in VOILA present a challenge to MLLMs. Through multi-step analysis, we reveal that current MLLMs struggle to comprehend inter-image relationships and exhibit limited capabilities in high-level relational reasoning. Notably, we observe that performance improves when following a multi-step strategy of least-to-most prompting. Comprehensive evaluations on open-source models and GPT-4o show that on text-based answers, the best accuracy for challenging scenarios is 13% (LLaMa 3.2) and even for simpler tasks is only 29% (GPT-4o), while human performance is significantly higher at 70% across both difficulty levels. | |
| dc.description.sponsorship | NY is supported by the Republic of Türkiye Ministry of National Education. MP, CB, and YY are supported by US NSF RI grant #2132724. TG was supported by the SURFF award from UMBC ORCA. We thank the NSF NAIRR initiative, the Research Computing (RC) at Arizona State University (ASU), and cr8dl.ai for their generous support in providing computing resources. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers. | |
| dc.description.uri | http://arxiv.org/abs/2503.00043 | |
| dc.format.extent | 25 pages | |
| dc.genre | conference papers and proceedings | |
| dc.genre | postprints | |
| dc.identifier | doi:10.13016/m2okcm-enuk | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2503.00043 | |
| dc.identifier.uri | http://hdl.handle.net/11603/41659 | |
| dc.language.iso | en | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
| dc.rights | Attribution 4.0 International | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | Computer Science - Computation and Language | |
| dc.subject | Computer Science - Computer Vision and Pattern Recognition | |
| dc.subject | Computer Science - Artificial Intelligence | |
| dc.title | VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0002-5593-2804 |