Adaptive Token Sampling For Efficient Vision Transformers

dc.contributor.author: Fayyaz, Mohsen
dc.contributor.author: Koohpayegani, Soroush Abbasi
dc.contributor.author: Jafari, Farnoush Rezaei
dc.contributor.author: Sengupta, Sunando
dc.contributor.author: Joze, Hamid Reza Vaezi
dc.contributor.author: Sommerlade, Eric
dc.contributor.author: Pirsiavash, Hamed
dc.contributor.author: Gall, Juergen
dc.date.accessioned: 2022-11-14T15:51:35Z
dc.date.available: 2022-11-14T15:51:35Z
dc.date.issued: 2022-11-03
dc.description: Computer Vision – ECCV 2022, 17th European Conference; Tel Aviv, Israel; October 23–27, 2022
dc.description.abstract: While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable parameter-free Adaptive Token Sampler (ATS) module, which can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer constant and varies for each input image. By integrating ATS as an additional layer within the current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training. Moreover, due to its differentiable design, one can also train a vision transformer equipped with ATS. We evaluate the efficiency of our module in both image and video classification tasks by adding it to multiple SOTA vision transformers. Our proposed module improves the SOTA by reducing their computational costs (GFLOPs) by 2×, while preserving their accuracy on the ImageNet, Kinetics-400, and Kinetics-600 datasets. The code is available at https://adaptivetokensampling.github.io/.
dc.description.sponsorship: Farnoush Rezaei Jafari acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A). Juergen Gall has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2070 - 390732324, GA1927/4-2 (FOR 2535 Anticipating Human Behavior), and the ERC Consolidator Grant FORHUE (101044724).
dc.description.uri: https://link.springer.com/chapter/10.1007/978-3-031-20083-0_24
dc.format.extent: 28 pages
dc.genre: book chapters
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2ifce-mfug
dc.identifier.citation: Fayyaz, Mohsen, Soroush Abbasi Koohpayegani, Farnoush Rezaei Jafari, Sunando Sengupta, Hamid Reza Vaezi Joze, Eric Sommerlade, Hamed Pirsiavash, and Jürgen Gall. "Adaptive Token Sampling for Efficient Vision Transformers." In Computer Vision – ECCV 2022, edited by Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, 396–414. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2022. https://doi.org/10.1007/978-3-031-20083-0_24.
dc.identifier.uri: https://doi.org/10.1007/978-3-031-20083-0_24
dc.identifier.uri: http://hdl.handle.net/11603/26321
dc.language.iso: en_US
dc.publisher: Springer
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Adaptive Token Sampling For Efficient Vision Transformers
dc.type: Text
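The abstract describes ATS as scoring tokens and adaptively sampling a variable number of significant ones per input image. As a rough illustration of that idea, the following is a minimal NumPy sketch under assumed details (using the CLS token's attention over patch tokens as significance scores, and inverse-transform sampling over their cumulative distribution), not the authors' actual implementation:

```python
import numpy as np

def adaptive_token_sample(tokens, cls_attn, max_tokens):
    """Keep a variable-size subset of patch tokens.

    Sketch of the adaptive-sampling idea: tokens are scored (here, by the
    CLS token's attention over them, an assumed choice) and sampled via
    inverse-transform sampling, so the number kept adapts to the input.
    """
    scores = cls_attn / cls_attn.sum()   # normalize to a distribution
    cdf = np.cumsum(scores)              # cumulative significance
    # Sample the CDF at evenly spaced quantiles. When a few tokens carry
    # most of the score, many quantiles hit the same token; duplicates
    # collapse, so fewer tokens survive for "easy" inputs.
    quantiles = np.linspace(0.0, 1.0, max_tokens, endpoint=False)
    idx = np.searchsorted(cdf, quantiles, side="right")
    kept = np.unique(np.clip(idx, 0, len(cls_attn) - 1))
    return tokens[kept], kept
```

With a uniform score distribution all `max_tokens` samples land on distinct tokens, while a peaked distribution collapses to far fewer, which is the mechanism by which the token count varies per image.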

Files

Original bundle

Name: 2111.15667.pdf
Size: 34.2 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission