Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles

dc.contributor.author: Elhenawy, Mohammed
dc.contributor.author: Jaradat, Shadi
dc.contributor.author: Alhadidi, Taqwa I.
dc.contributor.author: Ashqar, Huthaifa
dc.contributor.author: Jaber, Ahmed
dc.contributor.author: Rakotonirainy, Andry
dc.contributor.author: Tami, Mohammad Abu
dc.date.accessioned: 2025-10-16T15:27:12Z
dc.date.issued: 2025-09-08
dc.description: 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 5-6, 2025, Mount Pleasant, MI, USA
dc.description.abstract: Scene understanding is critical for various downstream tasks in autonomous driving, including facilitating driver-agent communication and enhancing human-centered explainability of autonomous vehicle (AV) decisions. This paper evaluates the capability of four multimodal large language models (MLLMs), including relatively small models, to understand scenes in a zero-shot, in-context learning setting. Additionally, we explore whether combining these models using an ensemble approach with majority voting can enhance scene understanding performance. Our experiments demonstrate that GPT-4o, the largest model, outperforms the others in scene understanding. However, the performance gap between GPT-4o and the smaller models is relatively modest, suggesting that advanced techniques such as improved in-context learning, retrieval-augmented generation (RAG), or fine-tuning could further optimize the smaller models' performance. We also observe mixed results with the ensemble approach: while some scene attributes show improvement in performance metrics such as F1-score, others experience a decline. These findings highlight the need for more sophisticated ensemble techniques to achieve consistent gains across all scene attributes. This study underscores the potential of leveraging MLLMs for scene understanding and provides insights into optimizing their performance for autonomous driving applications.
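The ensemble approach the abstract describes (majority voting over the per-attribute labels that several MLLMs assign to each scene, scored with F1) can be illustrated with a minimal sketch. All model names, attribute labels, and predictions below are hypothetical placeholders, not values from the paper:

from collections import Counter
from sklearn.metrics import f1_score

def majority_vote(labels):
    # Most common label among the models' answers for one scene;
    # Counter breaks ties by first-encountered order.
    return Counter(labels).most_common(1)[0][0]

# Hypothetical zero-shot predictions for one scene attribute
# (e.g., weather) across five scenes, one row per model.
model_preds = {
    "gpt-4o":  ["clear", "rain", "clear", "fog", "clear"],
    "model_b": ["clear", "rain", "fog",   "fog", "rain"],
    "model_c": ["rain",  "rain", "clear", "fog", "clear"],
}
ground_truth = ["clear", "rain", "clear", "fog", "clear"]

# Ensemble label per scene = majority vote across the models.
ensemble = [
    majority_vote([preds[i] for preds in model_preds.values()])
    for i in range(len(ground_truth))
]

# Per-attribute macro F1, the metric for which the abstract reports
# mixed results: voting helps some attributes and hurts others.
print("ensemble macro-F1:", f1_score(ground_truth, ensemble, average="macro"))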
dc.description.sponsorship: This research was funded partially by the Australian Government through the Australian Research Council Discovery Project DP220102598.
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/11139833
dc.format.extent: 7 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2eg4q-yj9b
dc.identifier.citation: Elhenawy, Mohammed, Shadi Jaradat, Taqwa I. Alhadidi, et al. “Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles.” 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 2025, 1–5. https://doi.org/10.1109/ICMI65310.2025.11139833.
dc.identifier.uri: https://doi.org/10.1109/ICMI65310.2025.11139833
dc.identifier.uri: http://hdl.handle.net/11603/40450
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Data Science
dc.rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Computer Science - Computer Vision and Pattern Recognition
dc.subject: Computer Science - Computation and Language
dc.title: Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6835-8338

Files

Original bundle

Name: 2506.12232v1.pdf
Size: 679.84 KB
Format: Adobe Portable Document Format