WeaQA: Weak Supervision via Captions for Visual Question Answering
Author/Creator
Banerjee, Pratyay; Gokhale, Tejas; Yang, Yezhou; Baral, Chitta
Date
August 2021
Citation of Original Publication
Banerjee, Pratyay, Tejas Gokhale, Yezhou Yang, and Chitta Baral. “WeaQA: Weak Supervision via Captions for Visual Question Answering.” Edited by Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, August 2021, 3420–35. https://doi.org/10.18653/v1/2021.findings-acl.302.
Rights
Creative Commons Attribution 4.0 International (CC BY 4.0)
Abstract
Methodologies for training visual question answering (VQA) models assume the availability of datasets with human-annotated Image-Question-Answer (I-Q-A) triplets. This has led to heavy reliance on datasets and a lack of generalization to new types of questions and scenes. Linguistic priors along with biases and errors due to annotator subjectivity have been shown to percolate into VQA models trained on such samples. We study whether models can be trained without any human-annotated Q-A pairs, but only with images and their associated textual descriptions or captions. We present a method to train models with synthetic Q-A pairs generated procedurally from captions. Additionally, we demonstrate the efficacy of spatial-pyramid image patches as a simple but effective alternative to dense and costly object bounding box annotations used in existing VQA models. Our experiments on three VQA benchmarks demonstrate the efficacy of this weakly-supervised approach, especially on the VQA-CP challenge, which tests performance under changing linguistic priors.
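
As a rough illustration of the procedural Q-A synthesis described in the abstract, the sketch below derives simple existence questions from a caption. It is not the authors' pipeline: the OBJECTS vocabulary, the question templates, and the caption_to_qa helper are hypothetical placeholders, and the paper's actual generation uses richer linguistic templates.

    # Toy sketch of template-based Q-A generation from a caption.
    # The OBJECTS vocabulary, templates, and caption_to_qa helper are
    # hypothetical; the paper's procedural generation is more elaborate.
    OBJECTS = {"man", "woman", "dog", "horse", "beach", "ball"}

    def caption_to_qa(caption):
        """Return synthetic (question, answer) pairs for a single caption."""
        words = caption.lower().rstrip(".").split()
        mentioned = [w for w in words if w in OBJECTS]
        qa_pairs = []
        for obj in mentioned:
            # Objects named in the caption are assumed present in the image.
            qa_pairs.append(("Is there a {} in the image?".format(obj), "yes"))
        if mentioned:
            qa_pairs.append(("What is shown in the image?", mentioned[0]))
        return qa_pairs

    print(caption_to_qa("A man riding a horse on the beach."))
    # [('Is there a man in the image?', 'yes'),
    #  ('Is there a horse in the image?', 'yes'),
    #  ('Is there a beach in the image?', 'yes'),
    #  ('What is shown in the image?', 'man')]

Similarly, the spatial-pyramid image patches mentioned in the abstract can be sketched as splitting an image into regular grids at a few pyramid levels; the specific levels used below (1x1, 2x2, 3x3) are an assumption for illustration, not necessarily those used in the paper.

    # Minimal sketch of spatial-pyramid patches as a stand-in for object
    # bounding boxes; the pyramid levels (1x1, 2x2, 3x3) are assumed here.
    import numpy as np

    def spatial_pyramid_patches(image, levels=(1, 2, 3)):
        """Split an H x W x C image into L x L grids of patches per level."""
        h, w = image.shape[:2]
        patches = []
        for level in levels:
            ys = np.linspace(0, h, level + 1, dtype=int)
            xs = np.linspace(0, w, level + 1, dtype=int)
            for i in range(level):
                for j in range(level):
                    patches.append(image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]])
        return patches

    image = np.zeros((224, 224, 3), dtype=np.uint8)
    print(len(spatial_pyramid_patches(image)))  # 1 + 4 + 9 = 14 patches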
