Adaptive Token Sampling For Efficient Vision Transformers
Date
2022-11-03
Citation of Original Publication
Fayyaz, Mohsen, Soroush Abbasi Koohpayegani, Farnoush Rezaei Jafari, Sunando Sengupta, Hamid Reza Vaezi Joze, Eric Sommerlade, Hamed Pirsiavash, and Jürgen Gall. “Adaptive Token Sampling for Efficient Vision Transformers.” In Computer Vision – ECCV 2022, edited by Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, 396–414. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2022. https://doi.org/10.1007/978-3-031-20083-0_24.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable parameter-free Adaptive Token Sampler (ATS) module, which can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer constant and varies for each input image. By integrating ATS as an additional layer within the current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, reducing their GFLOPs without any additional training. Moreover, due to its differentiable design, a vision transformer equipped with ATS can also be trained. We evaluate the efficiency of our module on both image and video classification tasks by adding it to multiple SOTA vision transformers. Our proposed module improves on the SOTA by reducing computational costs (GFLOPs) by 2×, while preserving accuracy on the ImageNet, Kinetics-400, and Kinetics-600 datasets. The code is available at https://adaptivetokensampling.github.io/.
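
For illustration, below is a minimal PyTorch sketch of the kind of token scoring and selection step the abstract describes. It is an assumption-laden simplification, not the authors' implementation: the helper name ats_score_and_sample and all tensor shapes are hypothetical, and the fixed top-k selection stands in for the paper's inverse-transform sampling, which retains a variable, input-dependent number of tokens.

# Minimal sketch (PyTorch) of an ATS-style scoring-and-sampling step.
import torch

def ats_score_and_sample(attn, v, k):
    # attn: (B, H, N, N) softmaxed attention weights; token 0 is the CLS token.
    # v:    (B, H, N, D) value vectors.
    # k:    maximum number of non-CLS tokens to keep.
    cls_attn = attn[:, :, 0, 1:]              # CLS attention to each token: (B, H, N-1)
    v_norm = v[:, :, 1:, :].norm(dim=-1)      # value-vector magnitudes:     (B, H, N-1)
    scores = (cls_attn * v_norm).mean(dim=1)  # head-averaged significance:  (B, N-1)
    scores = scores / scores.sum(dim=-1, keepdim=True)
    # Simplification: keep the k highest-scoring tokens. ATS instead draws
    # token indices via the inverse CDF of these scores, so the number of
    # kept tokens adapts to the input image.
    idx = scores.topk(k, dim=-1).indices + 1  # +1: re-offset past the CLS token
    return idx.sort(dim=-1).values

# Example with ViT-like shapes (196 patch tokens + CLS):
B, H, N, D = 2, 4, 197, 64
attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
v = torch.randn(B, H, N, D)
kept = ats_score_and_sample(attn, v, k=96)    # (2, 96) indices of retained tokens

Because the scoring uses only quantities already computed inside a self-attention block (the attention matrix and the values), a step like this adds no learnable parameters, which is what allows the module to be attached to pre-trained transformers without retraining.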