Consistent Explanations by Contrastive Learning

dc.contributor.author: Pillai, Vipin
dc.contributor.author: Koohpayegani, Soroush Abbasi
dc.contributor.author: Ouligian, Ashley
dc.contributor.author: Fong, Dennis
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2021-11-04T15:53:06Z
dc.date.available: 2021-11-04T15:53:06Z
dc.date.issued: 2021-10-01
dc.description: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
dc.description.abstract: Understanding and explaining the decisions of neural networks are critical to building trust in them, rather than relying on them as black-box algorithms. Post-hoc evaluation techniques, such as Grad-CAM, enable humans to inspect the spatial regions responsible for a particular network decision. However, such explanations are not always consistent with human priors, such as consistency across image transformations. Given an interpretation algorithm, e.g., Grad-CAM, we introduce a novel training method that trains the model to produce more consistent explanations. Since obtaining ground truth for a desired model interpretation is not a well-defined task, we adopt ideas from contrastive self-supervised learning and apply them to the interpretations of the model rather than its embeddings. Explicitly training the network to produce more reasonable interpretations, and subsequently evaluating those interpretations, will enhance our ability to trust the network. We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are consistent with human annotations while still achieving comparable classification accuracy. Moreover, since our method can be seen as a form of regularization, it outperforms the baseline classification accuracy in limited-data fine-grained classification settings on the Caltech-Birds, Stanford Cars, VGG Flowers, and FGVC-Aircraft datasets. In addition, because our method does not rely on annotations, it allows unlabeled data to be incorporated into training, which enables better generalization of the model. Our code is publicly available.
dc.description.sponsorship: This material is based upon work partially supported by the United States Air Force under Contract No. FA8750-19-C-0098, the U.S. Department of Commerce, National Institute of Standards and Technology under award number 60NANB18D279, NSF grant numbers 1845216 and 1920079, and funding from Northrop Grumman and SAP SE. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies.
dc.description.uri: https://www.computer.org/csdl/proceedings-article/cvpr/2022/694600k0203/1H1kRR6oMx2
dc.format.extent: 11 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2qwcu-fj3k
dc.identifier.citation: V. Pillai, S. Koohpayegani, A. Ouligian, D. Fong and H. Pirsiavash, "Consistent Explanations by Contrastive Learning," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 10203-10212. doi: 10.1109/CVPR52688.2022.00997
dc.identifier.uri: http://hdl.handle.net/11603/23218
dc.identifier.uri: https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.00997
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Consistent Explanations by Contrastive Learning
dc.type: Text
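
The abstract describes contrasting a model's Grad-CAM heatmaps rather than its embeddings: heatmaps computed for two augmented views of the same image form a positive pair, while heatmaps from other images serve as negatives. Below is a minimal NumPy sketch of such an InfoNCE-style consistency loss over precomputed heatmaps; the function name, temperature value, and toy data are illustrative assumptions, not the authors' implementation (which applies the loss to live Grad-CAM computations during training).

```python
import numpy as np

def contrastive_heatmap_loss(maps_a, maps_b, temperature=0.1):
    """InfoNCE-style consistency loss between two sets of heatmaps.

    maps_a, maps_b: arrays of shape (batch, H, W), where heatmap i in
    maps_a and heatmap i in maps_b come from two augmented views of the
    same image (positive pair); all other pairings act as negatives.
    Returns the mean cross-entropy over the batch (lower = more consistent).
    """
    n = maps_a.shape[0]
    a = maps_a.reshape(n, -1)
    b = maps_b.reshape(n, -1)
    # L2-normalize so the dot product is cosine similarity
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    logits = a @ b.T / temperature  # (n, n) similarity matrix
    # numerically stable softmax cross-entropy with targets on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: identical heatmaps as positives yield a lower loss than
# deliberately mismatched pairings.
rng = np.random.default_rng(0)
maps = rng.random((4, 7, 7))
loss_same = contrastive_heatmap_loss(maps, maps)
loss_mismatched = contrastive_heatmap_loss(maps, np.roll(maps, 1, axis=0))
```

In training, the gradient of this loss would flow back through the Grad-CAM computation into the network weights, pushing the model toward explanations that are stable under the chosen augmentations.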

Files

Original bundle
Name: 2110.00527.pdf
Size: 3.95 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission