ISD: Self-Supervised Learning by Iterative Similarity Distillation

dc.contributor.author: Tejankar, Ajinkya
dc.contributor.author: Koohpayegani, Soroush Abbasi
dc.contributor.author: Pillai, Vipin
dc.contributor.author: Favaro, Paolo
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2021-10-13T18:33:18Z
dc.date.available: 2021-10-13T18:33:18Z
dc.date.issued: 2021-09-10
dc.description: International Conference on Computer Vision (ICCV) 2021
dc.description.abstract: Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to pull two augmentations of an image (a positive pair) closer together than other random images (negative pairs). We argue that not all random images are equal. Hence, we introduce a self-supervised learning algorithm in which we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model into the student model by capturing the similarity of a query image to a set of random images and transferring that knowledge to the student. We argue that our method is less constrained than recent contrastive learning methods, so it can learn better features. In particular, our method should handle unbalanced, unlabeled data better than existing contrastive methods, because a randomly chosen negative set may include many samples that are semantically similar to the query image. In this case, our method labels them as highly similar, while standard contrastive methods label them as negative pairs. Our method achieves results comparable to state-of-the-art models. We also show that our method performs better in settings where the unlabeled data is unbalanced. Our code is available at https://github.com/UMBCvision/ISD.
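The abstract's core mechanism, replacing the binary positive/negative labels of contrastive learning with a teacher's soft similarity distribution that the student matches under a KL divergence, can be illustrated with a short PyTorch sketch. This is a minimal sketch written from the abstract alone, not code from the linked repository; the function names, temperature values, and memory-bank setup are illustrative assumptions.

import torch
import torch.nn.functional as F

def isd_loss(student_q, teacher_q, memory_bank, t_s=0.02, t_t=0.01):
    # student_q:   (B, D) student embedding of one augmentation of the query.
    # teacher_q:   (B, D) embedding of another augmentation from the slowly
    #              evolving teacher.
    # memory_bank: (K, D) features of random images (the "negative" set).
    # The temperatures t_s and t_t are placeholders, not the paper's values.
    student_q = F.normalize(student_q, dim=1)
    teacher_q = F.normalize(teacher_q, dim=1)
    bank = F.normalize(memory_bank, dim=1)

    # Cosine similarity of each query to every random image in the bank.
    sim_s = student_q @ bank.t() / t_s  # (B, K)
    sim_t = teacher_q @ bank.t() / t_t  # (B, K)

    # The teacher's soft similarity distribution is the target: random
    # images that resemble the query get high probability instead of
    # being pushed away as hard negatives.
    log_p_s = F.log_softmax(sim_s, dim=1)
    p_t = F.softmax(sim_t, dim=1).detach()
    return F.kl_div(log_p_s, p_t, reduction="batchmean")

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    # Iterative distillation: the teacher is a slowly moving exponential
    # average of the student, as in momentum-based methods.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1.0 - m)

Under these assumptions, a training step would encode two augmentations of each image with the student and teacher, compute this loss against a queue of past teacher features, take a gradient step on the student only, and then call ema_update.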
dc.description.sponsorship: This material is based upon work partially supported by the United States Air Force under Contract No. FA8750-19-C-0098, funding from SAP SE, and also NSF grant numbers 1845216 and 1920079. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies.
dc.description.uri: https://arxiv.org/abs/2012.09259
dc.description.uri: https://github.com/UMBCvision/ISD
dc.format.extent: 12 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m28gok-qyrx
dc.identifier.uri: http://hdl.handle.net/11603/23090
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: ISD: Self-Supervised Learning by Iterative Similarity Distillation
dc.type: Text
dcterms.creator: https://orcid.org/0000-0001-7550-5140

Files

Original bundle
Name: 2012.09259.pdf
Size: 3.87 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission