Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles
| Field | Value |
|---|---|
| dc.contributor.author | Elhenawy, Mohammed |
| dc.contributor.author | Jaradat, Shadi |
| dc.contributor.author | Alhadidi, Taqwa I. |
| dc.contributor.author | Ashqar, Huthaifa |
| dc.contributor.author | Jaber, Ahmed |
| dc.contributor.author | Rakotonirainy, Andry |
| dc.contributor.author | Tami, Mohammad Abu |
| dc.date.accessioned | 2025-10-16T15:27:12Z |
| dc.date.issued | 2025-09-08 |
| dc.description | 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 5-6, 2025, Mount Pleasant, MI, USA |
| dc.description.abstract | Scene understanding is critical for various downstream tasks in autonomous driving, including facilitating driver-agent communication and enhancing human-centered explainability of autonomous vehicle (AV) decisions. This paper evaluates the capability of four multimodal large language models (MLLMs), including relatively small models, to understand scenes in a zero-shot, in-context learning setting. Additionally, we explore whether combining these models using an ensemble approach with majority voting can enhance scene understanding performance. Our experiments demonstrate that GPT-4o, the largest model, outperforms the others in scene understanding. However, the performance gap between GPT-4o and the smaller models is relatively modest, suggesting that advanced techniques such as improved in-context learning, retrieval-augmented generation (RAG), or fine-tuning could further optimize the smaller models' performance. We also observe mixed results with the ensemble approach: while some scene attributes show improvement in performance metrics such as F1-score, others experience a decline. These findings highlight the need for more sophisticated ensemble techniques to achieve consistent gains across all scene attributes. This study underscores the potential of leveraging MLLMs for scene understanding and provides insights into optimizing their performance for autonomous driving applications. |
| dc.description.sponsorship | This research was funded partially by the Australian Government through the Australian Research Council Discovery Project DP220102598. |
| dc.description.uri | https://ieeexplore.ieee.org/abstract/document/11139833 |
| dc.format.extent | 7 pages |
| dc.genre | conference papers and proceedings |
| dc.genre | preprints |
| dc.identifier | doi:10.13016/m2eg4q-yj9b |
| dc.identifier.citation | Elhenawy, Mohammed, Shadi Jaradat, Taqwa I. Alhadidi, et al. “Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles.” 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 2025, 1–5. https://doi.org/10.1109/ICMI65310.2025.11139833. |
| dc.identifier.uri | https://doi.org/10.1109/ICMI65310.2025.11139833 |
| dc.identifier.uri | http://hdl.handle.net/11603/40450 |
| dc.language.iso | en |
| dc.publisher | IEEE |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) |
| dc.relation.ispartof | UMBC Data Science |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |
| dc.subject | Computer Science - Computer Vision and Pattern Recognition |
| dc.subject | Computer Science - Computation and Language |
| dc.title | Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles |
| dc.type | Text |
| dcterms.creator | https://orcid.org/0000-0002-6835-8338 |
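
The abstract above describes combining four MLLMs with majority voting over per-attribute scene labels. As a rough illustration of that idea (not the paper's actual implementation: the attribute names, candidate labels, example predictions, and tie-breaking rule below are all assumptions), a minimal sketch in Python:

```python
from collections import Counter

# Minimal sketch of a per-attribute majority-voting ensemble over MLLM
# outputs, as described in the abstract. Attribute names and predictions
# are illustrative assumptions, not taken from the paper.

def majority_vote(labels):
    """Return the most common label; ties break by first-seen order."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_scene_labels(per_model_outputs):
    """Combine per-attribute labels from several MLLMs by majority vote.

    per_model_outputs: list of dicts, one per model,
        each mapping scene attribute -> predicted label.
    """
    attributes = per_model_outputs[0].keys()
    return {
        attr: majority_vote([out[attr] for out in per_model_outputs])
        for attr in attributes
    }

# Hypothetical zero-shot outputs from four MLLMs for one driving scene.
outputs = [
    {"weather": "clear", "road_type": "highway", "time_of_day": "day"},
    {"weather": "clear", "road_type": "urban",   "time_of_day": "day"},
    {"weather": "rainy", "road_type": "highway", "time_of_day": "day"},
    {"weather": "clear", "road_type": "highway", "time_of_day": "night"},
]
print(ensemble_scene_labels(outputs))
# {'weather': 'clear', 'road_type': 'highway', 'time_of_day': 'day'}
```

A simple per-attribute vote like this may help explain the mixed results the abstract reports: attributes where the models err independently can benefit from voting, while attributes where errors are correlated across models can outvote the single best model.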