A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision
dc.contributor.author | Tejankar, Ajinkya | |
dc.contributor.author | Sanjabi, Maziar | |
dc.contributor.author | Wu, Bichen | |
dc.contributor.author | Xie, Saining | |
dc.contributor.author | Khabsa, Madian | |
dc.contributor.author | Pirsiavash, Hamed | |
dc.contributor.author | Firooz, Hamed | |
dc.date.accessioned | 2022-11-14T15:48:04Z | |
dc.date.available | 2022-11-14T15:48:04Z | |
dc.date.issued | 2022-01-06 | |
dc.description.abstract | Using natural language as supervision for training visual recognition models holds great promise. Recent works have shown that if such supervision is used in the form of alignment between images and captions in large training datasets, then the resulting aligned models perform well on downstream zero-shot classification tasks. In this paper, we focus on teasing out what parts of the language supervision are essential for training zero-shot image classification models. Through extensive and careful experiments, we show that: 1) A simple Bag-of-Words (BoW) caption can be used as a replacement for most of the image captions in the dataset. Surprisingly, we observe that this approach improves the zero-shot classification performance when combined with word balancing. 2) Using a BoW pretrained model, we can obtain more training data by generating pseudo-BoW captions on images that do not have a caption. Models trained on images with real and pseudo-BoW captions achieve stronger zero-shot performance. On ImageNet-1k zero-shot evaluation, our best model, which uses only 3M image-caption pairs, performs on par with a CLIP model trained on 15M image-caption pairs (31.5% vs 31.3%). | en_US |
dc.description.uri | https://arxiv.org/abs/2112.13884 | en_US |
dc.format.extent | 16 pages | en_US |
dc.genre | journal articles | en_US |
dc.genre | preprints | en_US |
dc.identifier | doi:10.13016/m2tcx0-pser | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2112.13884 | |
dc.identifier.uri | http://hdl.handle.net/11603/26319 | |
dc.language.iso | en_US | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | en_US |
dc.title | A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision | en_US |
dc.type | Text | en_US |