Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness

dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Mishra, Swaroop
dc.contributor.author: Luo, Man
dc.contributor.author: Sachdeva, Bhavdeep
dc.contributor.author: Baral, Chitta
dc.date.accessioned: 2024-02-27T22:51:15Z
dc.date.available: 2024-02-27T22:51:15Z
dc.date.issued: 2022-05
dc.description: Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022
dc.description.abstract: Data modification, whether via additional training datasets, data augmentation, debiasing, or dataset filtering, has been proposed as an effective solution for generalizing to out-of-domain (OOD) inputs, in both the natural language processing and computer vision literature. However, the effect of data modification on adversarial robustness remains unclear. In this work, we conduct a comprehensive study of common data modification strategies and evaluate not only their in-domain and OOD performance, but also their adversarial robustness (AR). We also present results on a two-dimensional synthetic dataset to visualize the effect of each method on the training distribution. This work serves as an empirical study towards understanding the relationship between generalizing to unseen domains and defending against adversarial perturbations. Our findings suggest that more data (either via additional datasets or data augmentation) benefits both OOD accuracy and AR. However, data filtering (previously shown to improve OOD accuracy on natural language inference) hurts OOD accuracy on other tasks such as question answering and image classification. We provide insights from our experiments to inform future work in this direction.
dc.description.sponsorship: This work was funded in part by the DARPA SAIL-ON program (W911NF2020006) and the DARPA CHESS program (FA875019C0003). The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
dc.description.uri: https://aclanthology.org/2022.findings-acl.213/
dc.format.extent: 14 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m25o6a-jnu5
dc.identifier.citation: Tejas Gokhale, Swaroop Mishra, Man Luo, Bhavdeep Sachdeva, and Chitta Baral. 2022. Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2705–2718, Dublin, Ireland. Association for Computational Linguistics.
dc.identifier.uri: https://doi.org/10.18653/v1/2022.findings-acl.213
dc.identifier.uri: http://hdl.handle.net/11603/31728
dc.publisher: ACL
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: CC BY 4.0 DEED Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804

Files

Original bundle

Name: 2022.findings-acl.213.pdf
Size: 598.51 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission