SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Navaneet, KL | |
| dc.contributor.author | Koohpayegani, Soroush Abbasi | |
| dc.contributor.author | Sleiman, Essam | |
| dc.contributor.author | Pirsiavash, Hamed | |
| dc.date.accessioned | 2023-11-09T17:46:37Z | |
| dc.date.available | 2023-11-09T17:46:37Z | |
| dc.date.issued | 2023-10-04 | |
| dc.description.abstract | Recently, there has been significant progress in reducing the computation of deep models at inference time. These methods can reduce both the computational needs and the power usage of deep models. Some of these approaches adaptively scale the compute based on the input instance. We show that such models can be vulnerable to a universal adversarial patch attack, where the attacker optimizes for a patch that, when pasted on any image, can increase the compute and power consumption of the model. We run experiments with three different efficient vision transformer methods, showing that in some cases the attacker can increase the computation to the maximum possible level by simply pasting a patch that occupies only 8% of the image area. We also show that a standard adversarial training defense method can reduce some of the attack's success. We believe adaptive efficient methods will be necessary in the future to lower the power usage of deep models, so we hope our paper encourages the community to study the robustness of these methods and to develop better defenses against the proposed attack. | en_US |
| dc.description.sponsorship | This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR00112190135 and funding from NSF grant 1845216. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies. | en_US |
| dc.description.uri | https://arxiv.org/abs/2310.02544 | en_US |
| dc.format.extent | 18 pages | en_US |
| dc.genre | journal articles | en_US |
| dc.genre | preprints | en_US |
| dc.identifier | doi:10.13016/m2peuy-x3cx | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2310.02544 | |
| dc.identifier.uri | http://hdl.handle.net/11603/30638 | |
| dc.language.iso | en_US | en_US |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | en_US |
| dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0 DEED) | * |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | * |
| dc.title | SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers | en_US |
| dc.type | Text | en_US |
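
The abstract above outlines the attack at a high level: a single patch is optimized across many images so that pasting it on any input drives an input-adaptive model toward its maximum compute. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation. The `ComputeProxyViT` stand-in model, its `keep` scores, and the fixed patch placement are all assumptions for illustration; a real target would be one of the paper's efficient ViTs (e.g., a token-pruning model) whose retained-token count serves as a differentiable compute proxy.

```python
import torch
import torch.nn as nn


class ComputeProxyViT(nn.Module):
    """Toy stand-in for an input-adaptive ViT (hypothetical, for illustration).

    forward() returns (logits, keep). keep scores in [0, 1] act as a
    differentiable proxy for per-token compute: the more tokens the model
    decides to keep, the more FLOPs (and energy) inference costs.
    """

    def __init__(self, dim=64, num_classes=10):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # 224x224 -> 14x14 tokens
        self.score = nn.Linear(dim, 1)   # per-token "keep" score
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)      # (B, N, dim)
        keep = torch.sigmoid(self.score(tokens)).squeeze(-1)   # (B, N)
        pooled = (tokens * keep.unsqueeze(-1)).mean(dim=1)     # soft token pruning
        return self.head(pooled), keep


def train_universal_patch(model, loader, patch_size=64, steps=1000, lr=0.1, device="cpu"):
    """Optimize one universal patch that maximizes the model's compute proxy."""
    patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.to(device).eval()
    data = iter(loader)
    for _ in range(steps):
        try:
            images, _ = next(data)
        except StopIteration:
            data = iter(loader)
            images, _ = next(data)
        patched = images.to(device).clone()
        # Paste the (clamped) patch at a fixed corner; gradients flow back
        # to `patch` through this slice assignment.
        patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        _, keep = model(patched)
        loss = -keep.mean()  # maximize expected kept tokens => maximize compute
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

With a 224×224 input, a 64×64 patch covers roughly 8% of the image area, matching the patch budget quoted in the abstract. In practice the patch location would typically be randomized during optimization so the patch remains effective wherever it is pasted.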
