Weakly Supervised Relative Spatial Reasoning for Visual Question Answering

dc.contributor.author: Banerjee, Pratyay
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Yang, Yezhou
dc.contributor.author: Baral, Chitta
dc.date.accessioned: 2025-06-05T14:03:20Z
dc.date.available: 2025-06-05T14:03:20Z
dc.date.issued: 2021-09-04
dc.description: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
dc.description.abstract: Vision-and-language (V&L) reasoning necessitates perception of visual concepts such as objects and actions, understanding semantics and language grounding, and reasoning about the interplay between the two modalities. One crucial aspect of visual reasoning is spatial understanding, which involves understanding relative locations of objects, i.e., implicitly learning the geometry of the scene. In this work, we evaluate the faithfulness of V&L models to such geometric understanding, by formulating the prediction of pair-wise relative locations of objects as a classification as well as a regression task. Our findings suggest that state-of-the-art transformer-based V&L models lack sufficient abilities to excel at this task. Motivated by this, we design two objectives as proxies for 3D spatial reasoning (SR) – object centroid estimation, and relative position estimation – and train V&L models with weak supervision from off-the-shelf depth estimators. This leads to considerable improvements in accuracy for the "GQA" visual question answering challenge (in fully supervised, few-shot, and O.O.D settings) as well as improvements in relative spatial reasoning. Code and data will be released here.
dc.description.sponsorship: The authors acknowledge support from NSF grants #1750082 and #1816039, DARPA SAIL-ON program #W911NF2020006, and ONR award #N00014-20-1-2332.
dc.description.uri: https://ieeexplore.ieee.org/document/9711054/?arnumber=9711054
dc.format.extent: 11 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2srqu-tgc4
dc.identifier.citation: Banerjee, Pratyay, Tejas Gokhale, Yezhou Yang, and Chitta Baral. “Weakly Supervised Relative Spatial Reasoning for Visual Question Answering.” 2021 IEEE/CVF International Conference on Computer Vision (ICCV), October 2021, 1888–98. https://doi.org/10.1109/ICCV48922.2021.00192.
dc.identifier.uri: https://doi.org/10.1109/ICCV48922.2021.00192
dc.identifier.uri: http://hdl.handle.net/11603/38691
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Grounding
dc.subject: Three-dimensional displays
dc.subject: Geometry
dc.subject: Vision + language
dc.subject: Estimation
dc.subject: Predictive models
dc.subject: Semantics
dc.subject: Visual reasoning and logical representation
dc.subject: Visualization
dc.title: Weakly Supervised Relative Spatial Reasoning for Visual Question Answering
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804

Files

Original bundle

Name: 210901934v1.pdf
Size: 1.38 MB
Format: Adobe Portable Document Format