Boosting Self-Supervised Learning via Knowledge Transfer

dc.contributor.author: Noroozi, Mehdi
dc.contributor.author: Vinjimoor, Ananth
dc.contributor.author: Favaro, Paolo
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2019-07-03T14:28:50Z
dc.date.available: 2019-07-03T14:28:50Z
dc.date.issued: 2018-12-17
dc.description: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
dc.description.abstract: In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models, including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks PASCAL VOC 2007, ILSVRC12, and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9% to 2.6% in object detection on PASCAL VOC 2007.
dc.description.sponsorship: PF has been supported by the Swiss National Science Foundation (SNSF) grant number 200021 169622. HP has been supported by GE Research and Verisk Analytics.
dc.description.uri: https://ieeexplore.ieee.org/document/8579073
dc.format.extent: 9 pages
dc.genre: conference papers and proceedings preprints
dc.identifier: doi:10.13016/m24lmq-2ssa
dc.identifier.citation: Mehdi Noroozi, et al., Boosting Self-Supervised Learning via Knowledge Transfer, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, DOI: 10.1109/CVPR.2018.00975
dc.identifier.uri: https://doi.org/10.1109/CVPR.2018.00975
dc.identifier.uri: http://hdl.handle.net/11603/14336
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: © 2018 IEEE
dc.subject: learning (artificial intelligence)
dc.subject: neural nets
dc.subject: object detection
dc.subject: knowledge transfer
dc.subject: self-supervised learning
dc.subject: incompatible models
dc.subject: target domain
dc.subject: effective transfer strategy
dc.subject: deeper neural network models
dc.subject: novel self-supervised task
dc.title: Boosting Self-Supervised Learning via Knowledge Transfer
dc.type: Text
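
The knowledge-transfer step summarized in point 3 of the abstract can be sketched in a few lines of code. The Python/PyTorch snippet below is a minimal, hypothetical illustration, not the authors' released code: it assumes the transfer is done by clustering the deep "teacher" model's features with k-means and training a shallower "student" to predict the resulting cluster IDs as pseudo-labels; random arrays stand in for images and for the teacher's features, and all names and dimensions are illustrative.

import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for a dataset of 1024 samples: 64-d raw inputs for the student,
# and 256-d features extracted by a deep pretext-trained teacher network.
inputs = rng.normal(size=(1024, 64)).astype(np.float32)
teacher_features = rng.normal(size=(1024, 256)).astype(np.float32)

# Step 1 (assumption): cluster the teacher's feature space; each sample's
# cluster ID becomes its pseudo-label.
num_clusters = 16
pseudo_labels = KMeans(n_clusters=num_clusters, n_init=10,
                       random_state=0).fit_predict(teacher_features)

# Step 2: train a shallower student, with an architecture unrelated to the
# teacher's, to predict the pseudo-labels from the raw inputs.
x = torch.from_numpy(inputs)
y = torch.from_numpy(pseudo_labels).long()
student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                        nn.Linear(128, num_clusters))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(student(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: pseudo-label loss {loss.item():.3f}")

Note that the student never touches the teacher's weights, only its cluster assignments, which is what allows the self-supervised model and the final task-specific fine-tuned model to have different architectures, as the abstract describes.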

Files

Original bundle
Name: 1805.00385.pdf
Size: 2.12 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon at submission