SimA: Simple Softmax-free Attention for Vision Transformers

dc.contributor.author: Koohpayegani, Soroush Abbasi
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2022-11-14T15:40:06Z
dc.date.available: 2022-11-14T15:40:06Z
dc.date.issued: 2022-06-17
dc.description.abstract: Recently, vision transformers have become very popular. However, deploying them in many applications is computationally expensive, partly due to the Softmax layer in the attention block. We introduce a simple but effective Softmax-free attention block, SimA, which normalizes the query and key matrices with a simple ℓ1-norm instead of using a Softmax layer. The attention block in SimA is then a simple multiplication of three matrices, so SimA can dynamically change the order of the computation at test time to achieve computation that is linear in either the number of tokens or the number of channels. We empirically show that SimA applied to three SOTA variations of transformers, DeiT, XCiT, and CvT, results in on-par accuracy compared to the SOTA models, without any need for a Softmax layer. Interestingly, changing SimA from multi-head to single-head has only a small effect on the accuracy, which simplifies the attention block further. The code is available here: https://github.com/UCDvision/sima
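Since the abstract describes the mechanism only in prose, the following is a minimal sketch of ℓ1-normalized, Softmax-free attention with a reorderable matrix product, as the abstract describes it. The function name, tensor shapes, and the choice of normalization axis are illustrative assumptions, not the authors' implementation (see the GitHub link above for that).

```python
# Minimal sketch of SimA-style attention: Q and K are l1-normalized
# (here over the token axis, which is an assumption) instead of applying
# Softmax, so attention is a product of three matrices whose evaluation
# order can be chosen at test time.
import torch
import torch.nn.functional as F

def sima_attention(q, k, v):
    # q, k, v: (batch, tokens, channels)
    n, d = q.shape[-2], q.shape[-1]
    q = F.normalize(q, p=1, dim=-2)  # l1-normalize each channel of Q across tokens
    k = F.normalize(k, p=1, dim=-2)  # l1-normalize each channel of K across tokens
    if n <= d:
        # (Q K^T) V: cost O(n^2 d), quadratic in tokens, linear in channels
        return (q @ k.transpose(-2, -1)) @ v
    # Q (K^T V): cost O(n d^2), linear in tokens, quadratic in channels
    return q @ (k.transpose(-2, -1) @ v)

# Example: 196 tokens (14x14 patches), 64 channels
x = torch.randn(2, 196, 64)
print(sima_attention(x, x, x).shape)  # torch.Size([2, 196, 64])
```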
dc.description.sponsorship: This material is based upon work partially supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR00112190135, the United States Air Force under Contract No. FA8750-19-C-0098, funding from SAP SE, and NSF grants 1845216 and 1920079. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies. Moreover, we would like to thank K L Navaneet, Vipin Pillai, and Kossar Pourahmadi for the valuable discussions and for proofreading the paper.
dc.description.uri: https://arxiv.org/abs/2206.08898
dc.format.extent: 15 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2xnj5-zdko
dc.identifier.uri: https://doi.org/10.48550/arXiv.2206.08898
dc.identifier.uri: http://hdl.handle.net/11603/26315
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: SimA: Simple Softmax-free Attention for Vision Transformers
dc.type: Text

Files

Original bundle

Name: 2206.08898.pdf
Size: 4.43 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon at submission