FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks
Date
2024-11-06
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Machine learning, and in particular deep learning, is used in a broad range of critical applications. Implementing such models in custom hardware can be highly beneficial thanks to lower power consumption and computation latency compared to GPUs. However, an error in their output can lead to disastrous outcomes. An adversary may force misclassification by inducing a small number of bit-flips at targeted locations, thereby degrading the model's accuracy. To address this threat, this paper presents FAT-RABBIT, a cost-effective mechanism that mitigates such attacks by training the model so that few weights have an outsized influence on the output, thereby reducing the model's sensitivity to fault-injection attacks. Moreover, to increase robustness against large bit-wise perturbations, we propose an optimization scheme called M-SAM. We then augment FAT-RABBIT with the M-SAM optimizer to further bolster model accuracy under bit-flipping fault attacks. Notably, these approaches incur no additional hardware overhead. Our experimental results demonstrate the robustness of FAT-RABBIT and its augmented version, Augmented FAT-RABBIT, against such attacks.
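For illustration, the sketch below shows a minimal, generic sharpness-aware minimization (SAM) training step in PyTorch, the style of optimization that M-SAM appears to build on. The function name sam_step, the perturbation radius rho, and the two-pass structure are illustrative assumptions; the abstract does not specify M-SAM's formulation, so this is background on SAM-style training rather than the paper's method.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05):
    """One generic SAM step: perturb the weights toward the worst-case
    direction within an L2 ball of radius rho, then descend on the loss
    evaluated at the perturbed weights. Illustrative sketch only; the
    paper's M-SAM variant is not detailed in the abstract."""
    optimizer.zero_grad()

    # First forward/backward pass: gradients at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Ascent perturbation e = rho * g / ||g||, applied in place.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradients of the loss at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights, then update with the sharpness-aware
    # gradients computed at the perturbed point.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```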