Improving Diversity with Adversarially Learned Transformations for Domain Generalization

dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Anirudh, Rushil
dc.contributor.author: Thiagarajan, Jayaraman J.
dc.contributor.author: Kailkhura, Bhavya
dc.contributor.author: Baral, Chitta
dc.contributor.author: Yang, Yezhou
dc.date.accessioned: 2024-02-27T22:51:09Z
dc.date.available: 2024-02-27T22:51:09Z
dc.date.issued: 2023-02-06
dc.description: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 02-07 January 2023
dc.description.abstract: To be successful in single source domain generalization (SSDG), maximizing the diversity of synthesized domains has emerged as one of the most effective strategies. Recent success in SSDG comes from methods that pre-specify diversity-inducing image augmentations during training, so that they may lead to better generalization on new domains. However, naïve pre-specified augmentations are not always effective, either because they cannot model large domain shifts or because the specific choice of transforms may not cover the types of shift commonly occurring in domain generalization. To address this issue, we present a novel framework called ALT: adversarially learned transformations, which uses an adversary neural network to model plausible yet hard image transformations that fool the classifier. ALT learns image transformations by randomly initializing the adversary network for each batch and optimizing it for a fixed number of steps to maximize classification error. The classifier is trained by enforcing consistency between its predictions on the clean and transformed images. With extensive empirical analysis, we find that this new form of adversarial transformation achieves both objectives of diversity and hardness simultaneously, outperforming all existing techniques on competitive benchmarks for SSDG. We also show that ALT can seamlessly work with existing diversity modules to produce highly distinct and large transformations of the source domain, leading to state-of-the-art performance. Code: https://github.com/tejas-gokhale/ALT
dc.description.sponsorship: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 (Lawrence Livermore National Security, LLC) and was supported by the LDRD Program under project 22-ERD-006, with IM release number LLNL-JRNL-836221. BK's efforts were supported by 22-DR-009. TG, CB, and YY were supported by NSF RI grants #1816039 and #2132724.
dc.description.uri: https://ieeexplore.ieee.org/document/10030875
dc.format.extent: 10 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2zuje-pras
dc.identifier.citation: T. Gokhale, R. Anirudh, J. J. Thiagarajan, B. Kailkhura, C. Baral and Y. Yang, "Improving Diversity with Adversarially Learned Transformations for Domain Generalization," 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2023, pp. 434-443, doi: 10.1109/WACV56688.2023.00051.
dc.identifier.uri: https://doi.org/10.1109/WACV56688.2023.00051
dc.identifier.uri: http://hdl.handle.net/11603/31724
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain Mark 1.0
dc.rights.uri: https://creativecommons.org/publicdomain/mark/1.0/
dc.title: Improving Diversity with Adversarially Learned Transformations for Domain Generalization
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804
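
The training procedure described in the abstract (per-batch random adversary initialization, a fixed number of adversary steps maximizing classification error, then classifier training with a clean/transformed consistency term) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function and parameter names, the step counts and loss weights, and the use of KL divergence as the consistency measure are all assumptions; consult the linked repository for the actual code.

```python
# Hypothetical sketch of one ALT training step; names, hyperparameters,
# and the KL-based consistency loss are illustrative assumptions, not the
# authors' implementation (see https://github.com/tejas-gokhale/ALT).
import torch
import torch.nn as nn
import torch.nn.functional as F

def alt_step(classifier, make_adversary, images, labels,
             adv_steps=5, adv_lr=1e-3, consistency_weight=1.0):
    """Return the classifier loss for one ALT update on a batch."""
    # 1. Randomly initialize the adversary (image-to-image) network
    #    afresh for this batch.
    adversary = make_adversary()
    adv_opt = torch.optim.SGD(adversary.parameters(), lr=adv_lr)

    # 2. Optimize the adversary for a fixed number of steps to *maximize*
    #    the classifier's error on the transformed images.
    for _ in range(adv_steps):
        adv_opt.zero_grad()
        transformed = adversary(images)
        loss_adv = -F.cross_entropy(classifier(transformed), labels)
        loss_adv.backward()
        adv_opt.step()

    # 3. Train the classifier: task loss on clean images plus a consistency
    #    term between its predictions on clean and transformed views.
    with torch.no_grad():
        transformed = adversary(images)
    logits_clean = classifier(images)
    logits_tf = classifier(transformed)
    task_loss = F.cross_entropy(logits_clean, labels)
    consistency = F.kl_div(F.log_softmax(logits_tf, dim=1),
                           F.softmax(logits_clean, dim=1),
                           reduction="batchmean")
    return task_loss + consistency_weight * consistency
```

In this sketch the caller would backpropagate the returned loss through the classifier only; the adversary is discarded after each batch, which is what keeps the transformations diverse across batches.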

Files

Original bundle

Name:
Improving_Diversity_with_Adversarially_Learned_Transformations_for_Domain_Generalization.pdf
Size:
1.56 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
2.56 KB
Description:
Item-specific license agreed upon to submission