Mean Shift for Self-Supervised Learning
Date
2022-02-28
Citation of Original Publication
S. A. Koohpayegani, A. Tejankar and H. Pirsiavash, "Mean Shift for Self-Supervised Learning," 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10306-10315, doi: 10.1109/ICCV48922.2021.01016.
Rights
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
Most recent self-supervised learning (SSL) algorithms learn features either by contrasting between instances of images or by clustering the images and then contrasting between the image clusters. We introduce a simple mean-shift algorithm that learns representations by grouping images together without contrasting between them and without adopting much prior knowledge of the structure of the clusters. We simply “shift” the embedding of each image to be close to the “mean” of the embeddings of its neighbors. Since in our setting the nearest neighbor is always another augmentation of the same image, our model is identical to BYOL when using only one nearest neighbor instead of the 5 used in our experiments. Our model achieves 72.4% on ImageNet linear evaluation with ResNet50 at 200 epochs, outperforming BYOL.
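The "shift toward the mean of the neighbors" objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a query embedding (online encoder), a target embedding (momentum encoder) per image, and a memory bank of past target embeddings; the function name and shapes are made up for this example.

```python
import numpy as np

def mean_shift_loss(query, target, memory_bank, k=5):
    """Sketch of a mean-shift SSL loss: pull each query embedding
    toward its target's k nearest neighbors in a memory bank.
    query: (B, D), target: (B, D), memory_bank: (M, D)."""
    def normalize(x):
        # L2-normalize rows so dot products are cosine similarities
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    q, t, bank = normalize(query), normalize(target), normalize(memory_bank)

    # cosine similarity of each target to every bank entry, then
    # pick the k most similar entries (the nearest neighbors)
    sim = t @ bank.T                           # (B, M)
    nn_idx = np.argsort(-sim, axis=1)[:, :k]   # (B, k)
    neighbors = bank[nn_idx]                   # (B, k, D)

    # squared L2 distance on the unit sphere: ||q - n||^2 = 2 - 2 q.n;
    # minimizing it shifts the query toward the neighbor mean
    dist = 2.0 - 2.0 * np.einsum('bd,bkd->bk', q, neighbors)
    return dist.mean()
```

With k=1 and the target itself stored in the bank, the nearest neighbor is the other augmentation's own target embedding, which recovers the BYOL-style objective the abstract mentions.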