SimA: Simple Softmax-free Attention for Vision Transformers
dc.contributor.author | Koohpayegani, Soroush Abbasi | |
dc.contributor.author | Pirsiavash, Hamed | |
dc.date.accessioned | 2022-11-14T15:40:06Z | |
dc.date.available | 2022-11-14T15:40:06Z | |
dc.date.issued | 2022-06-17 | |
dc.description.abstract | Recently, vision transformers have become very popular. However, deploying them in many applications is computationally expensive, partly due to the Softmax layer in the attention block. We introduce a simple but effective Softmax-free attention block, SimA, which normalizes the query and key matrices with a simple ℓ1-norm instead of using a Softmax layer. The attention block in SimA then becomes a simple multiplication of three matrices, so SimA can dynamically change the ordering of the computation at test time to achieve computation that is linear in either the number of tokens or the number of channels. We empirically show that SimA applied to three SOTA variations of transformers, DeiT, XCiT, and CvT, results in accuracy on par with the SOTA models, without any need for a Softmax layer. Interestingly, changing SimA from multi-head to single-head has only a small effect on the accuracy, which simplifies the attention block further. The code is available here: https://github.com/UCDvision/sima | en_US |
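The abstract describes the core mechanism: ℓ1-normalize Q and K, drop the Softmax, and associate the three-matrix product Q Kᵀ V in whichever order is cheaper at test time. The following is a minimal illustrative sketch of that idea in PyTorch; the normalization axis (along the token dimension) and the function name are assumptions made here for illustration, not the authors' reference implementation (see https://github.com/UCDvision/sima for that).

```python
import torch

def sima_style_attention(q, k, v, linear_in_tokens=True, eps=1e-6):
    """q, k, v: tensors of shape (batch, tokens, channels).

    Sketch of a Softmax-free, SimA-style attention block based on the abstract:
    Q and K are L1-normalized (axis choice is an assumption), and the product
    Q @ K^T @ V is associated either way depending on which cost is lower.
    """
    # L1-normalize Q and K instead of applying Softmax (normalization along
    # the token dimension is an assumption of this sketch).
    q = q / (q.abs().sum(dim=1, keepdim=True) + eps)
    k = k / (k.abs().sum(dim=1, keepdim=True) + eps)

    if linear_in_tokens:
        # Compute (K^T V) first: O(N * d^2), linear in the number of tokens N.
        context = k.transpose(1, 2) @ v      # (batch, channels, channels)
        return q @ context                   # (batch, tokens, channels)
    else:
        # Compute (Q K^T) first: O(N^2 * d), linear in the number of channels d.
        attn = q @ k.transpose(1, 2)         # (batch, tokens, tokens)
        return attn @ v                      # (batch, tokens, channels)

# Both orderings give the same result up to floating-point error,
# which is what allows the reordering at test time.
x = torch.randn(2, 196, 64)
out_a = sima_style_attention(x, x, x, linear_in_tokens=True)
out_b = sima_style_attention(x, x, x, linear_in_tokens=False)
assert torch.allclose(out_a, out_b, atol=1e-5)
```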
dc.description.sponsorship | This material is based upon work partially supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR00112190135, the United States Air Force under Contract No. FA8750-19-C-0098, funding from SAP SE, and NSF grants 1845216 and 1920079. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies. Moreover, we would like to thank K L Navaneet, Vipin Pillai, and Kossar Pourahmadi for the valuable discussions and proof-reading the paper. | en_US |
dc.description.uri | https://arxiv.org/abs/2206.08898 | en_US |
dc.format.extent | 15 pages | en_US |
dc.genre | journal articles | en_US |
dc.genre | preprints | en_US |
dc.identifier | doi:10.13016/m2xnj5-zdko | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2206.08898 | |
dc.identifier.uri | http://hdl.handle.net/11603/26315 | |
dc.language.iso | en_US | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | en_US |
dc.title | SimA: Simple Softmax-free Attention for Vision Transformers | en_US |
dc.type | Text | en_US |