Semantically Distributed Robust Optimization for Vision-and-Language Inference

Citation of Original Publication

Gokhale, Tejas, Abhishek Chaudhary, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. “Semantically Distributed Robust Optimization for Vision-and-Language Inference.” Edited by Smaranda Muresan, Preslav Nakov, and Aline Villavicencio. Findings of the Association for Computational Linguistics: ACL 2022, May 2022, 1493–1513. https://doi.org/10.18653/v1/2022.findings-acl.118.

Rights

Attribution 4.0 International

Abstract

Analysis of vision-and-language models has revealed their brittleness under linguistic phenomena such as paraphrasing, negation, textual entailment, and word substitutions with synonyms or antonyms. While data augmentation techniques have been designed to mitigate these failure modes, methods that integrate this knowledge directly into the training pipeline remain under-explored. In this paper, we present SDRO, a model-agnostic method that utilizes a set of linguistic transformations in a distributed robust optimization setting, along with an ensembling technique to leverage these transformations during inference. Experiments on benchmark datasets with images (NLVR²) and video (VIOLIN) demonstrate performance improvements as well as robustness to adversarial attacks. Experiments on binary VQA explore the generalizability of this method to other V&L tasks.
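
To make the idea of robust optimization over linguistic transformations concrete, the sketch below shows a generic worst-case (DRO-style) training step and a test-time ensembling routine. This is an illustrative toy, not the authors' SDRO implementation: the transformation set (TRANSFORMS), the hashed bag-of-words encoder (hash_bow), and the helpers sdro_step and ensemble_predict are all hypothetical placeholders standing in for a real vision-and-language model and the paper's transformation pipeline.

```python
# Hypothetical sketch of DRO-style training over semantic text transformations.
import torch
import torch.nn as nn
import torch.nn.functional as F

def hash_bow(text, dim=256):
    """Toy text encoder: hashed bag-of-words vector (stand-in for a V&L encoder)."""
    v = torch.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

# Illustrative semantic transformations: each maps (sentence, label) to a
# transformed pair; meaning-altering ones (e.g., negation) also flip the label.
def identity(s, y):      return s, y
def synonym_swap(s, y):  return s.replace("large", "big"), y
def negate(s, y):        return "it is not true that " + s, 1 - y

TRANSFORMS = [identity, synonym_swap, negate]

model = nn.Linear(256, 2)                      # placeholder classifier head
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def sdro_step(sentence, label):
    """One robust update: minimize the worst-case loss over the transformation set."""
    losses = []
    for t in TRANSFORMS:
        s_t, y_t = t(sentence, label)
        logits = model(hash_bow(s_t))
        losses.append(F.cross_entropy(logits.unsqueeze(0), torch.tensor([y_t])))
    worst = torch.stack(losses).max()          # robust (max) loss over the group
    opt.zero_grad()
    worst.backward()
    opt.step()
    return worst.item()

def ensemble_predict(sentence):
    """Test-time ensembling: average scores over label-preserving transformations."""
    with torch.no_grad():
        probs = [F.softmax(model(hash_bow(t(sentence, 0)[0])), dim=-1)
                 for t in TRANSFORMS if t is not negate]
        return torch.stack(probs).mean(0).argmax().item()

loss = sdro_step("the large dog is on the left", 1)
print(f"worst-case loss: {loss:.3f}",
      "pred:", ensemble_predict("the big dog is on the left"))
```

Under these assumptions, the training step optimizes the worst-case loss across transformed variants rather than the average, and inference aggregates predictions across label-preserving variants; the actual SDRO method and ensembling scheme are described in the cited paper.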