ISD: Self-Supervised Learning by Iterative Similarity Distillation
dc.contributor.author | Tejankar, Ajinkya | |
dc.contributor.author | Koohpayegani, Soroush Abbasi | |
dc.contributor.author | S.P., Vipin | |
dc.contributor.author | Favaro, Paolo | |
dc.contributor.author | Pirsiavash, Hamed | |
dc.date.accessioned | 2021-10-13T18:33:18Z | |
dc.date.available | 2021-10-13T18:33:18Z | |
dc.date.issued | 2021-09-10 | |
dc.description | International Conference on Computer Vision (ICCV) 2021 | en_US |
dc.description.abstract | Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to push two augmentations of an image (positive pairs) closer together compared to other random images (negative pairs). We argue that not all random images are equal. Hence, we introduce a self-supervised learning algorithm in which we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model into the student model by capturing the similarity of a query image to some random images and transferring that knowledge to the student. We argue that our method is less constrained than recent contrastive learning methods, so it can learn better features. In particular, our method should handle unbalanced and unlabeled data better than existing contrastive learning methods, because the randomly chosen negative set may include many samples that are semantically similar to the query image. In this case, our method labels them as highly similar, while standard contrastive methods label them as negative pairs. Our method achieves results comparable to state-of-the-art models. We also show that our method performs better in settings where the unlabeled data is unbalanced. Our code is available at https://github.com/UMBCvision/ISD; a minimal illustrative sketch of the distillation step follows this record. | en_US |
dc.description.sponsorship | This material is based upon work partially supported by the United States Air Force under Contract No. FA8750-19-C-0098, funding from SAP SE, and also NSF grant numbers 1845216 and 1920079. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies. | en_US |
dc.description.uri | https://arxiv.org/abs/2012.09259 | en_US |
dc.description.uri | https://github.com/UMBCvision/ISD | |
dc.format.extent | 12 pages | en_US |
dc.genre | conference papers and proceedings | en_US |
dc.genre | preprints | en_US |
dc.identifier | doi:10.13016/m28gok-qyrx | |
dc.identifier.uri | http://hdl.handle.net/11603/23090 | |
dc.language.iso | en_US | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | * |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | * |
dc.title | ISD: Self-Supervised Learning by Iterative Similarity Distillation | en_US |
dc.type | Text | en_US |
dcterms.creator | https://orcid.org/0000-0001-7550-5140 | en_US |
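The abstract describes the core ISD step: the teacher's soft similarity distribution of a query image over a set of random anchor images is distilled into the student, while the teacher slowly tracks the student. The following is a minimal PyTorch-style sketch of that idea, assuming L2-normalized embeddings, a plain memory bank of anchor features, KL-divergence distillation, and an EMA teacher update. The function names, temperature values (tau_s, tau_t), and momentum m are illustrative assumptions, not the authors' reference implementation (which lives at the GitHub link above).

```python
import torch
import torch.nn.functional as F

def isd_distillation_loss(student_q, teacher_k, memory_bank,
                          tau_s=0.02, tau_t=0.01):
    """Sketch of a soft-similarity distillation loss in the spirit of ISD.

    student_q:   (B, D) student embeddings of one augmentation of the batch.
    teacher_k:   (B, D) teacher embeddings of another augmentation.
    memory_bank: (N, D) embeddings of random images (the anchor/negative set).
    tau_s, tau_t: illustrative temperatures, not the paper's tuned settings.
    """
    student_q = F.normalize(student_q, dim=1)
    teacher_k = F.normalize(teacher_k, dim=1)
    bank = F.normalize(memory_bank, dim=1)

    # Teacher's soft similarity distribution over the random anchor images;
    # semantically similar anchors receive high probability instead of being
    # forced to act as hard negatives.
    t_probs = F.softmax(teacher_k @ bank.t() / tau_t, dim=1).detach()

    # Student's distribution over the same anchors.
    s_log_probs = F.log_softmax(student_q @ bank.t() / tau_s, dim=1)

    # KL divergence transfers the teacher's soft similarities to the student.
    return F.kl_div(s_log_probs, t_probs, reduction="batchmean")

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    """Slowly evolve the teacher as an exponential moving average (EMA)
    of the student, so the distillation targets change gradually."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.data.mul_(m).add_(ps.data, alpha=1 - m)
```

Using a lower teacher temperature than student temperature sharpens the target distribution, a common choice in distillation-style self-supervised methods; consult the released ISD code for the actual architecture, memory-bank handling, and hyperparameters.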