Title: FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks
Authors: Pourmehrani, Hossein; Bahrami, Javad; Nooralinejad, Parsa; Pirsiavash, Hamed; Karimi, Naghmeh
Date issued: 2024-11-06
Date available: 2024-12-11
URI: http://hdl.handle.net/11603/37026
Conference: IEEE International Test Conference (ITC), San Diego, CA, USA
Department: UMBC Cybersecurity Institute
Extent: 5 pages
Language: en-US
Type: Text
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract: Machine learning, and in particular deep learning, is used in a broad range of crucial applications. Implementing such models in custom hardware can be highly beneficial thanks to its low power consumption and low computation latency compared to GPUs. However, an error in a model's output can lead to disastrous outcomes. An adversary may force misclassification by inducing bit-flips at targeted locations in the model, thus degrading its accuracy. To address this threat, this paper presents FAT-RABBIT, a cost-effective mechanism that mitigates such attacks by training the model so that few weights are highly impactful in the outcome, thereby reducing the model's sensitivity to fault-injection attacks. Moreover, to increase robustness against large bit-wise perturbations, we propose an optimization scheme called M-SAM. We then augment FAT-RABBIT with the M-SAM optimizer to further bolster model accuracy against bit-flipping fault attacks. Notably, these approaches incur no additional hardware overhead. Our experimental results demonstrate the robustness of FAT-RABBIT and its augmented version, called Augmented FAT-RABBIT, against such attacks.
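
The abstract notes that a handful of bit-flips at targeted weight locations can wreck a model's accuracy. The sketch below is not from the paper; it is a generic illustration of why a single flip can be so damaging for IEEE-754 float32 weights: flipping a high exponent bit turns a small weight into an astronomically large one, which is exactly the kind of high-impact weight that fault-aware training aims to avoid depending on.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) of a float32 value and return the result."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))  # reinterpret float as uint32
    bits ^= 1 << bit                                         # flip the chosen bit
    (out,) = struct.unpack("<f", struct.pack("<I", bits))    # reinterpret back as float
    return out

w = 0.5
# Flipping the top exponent bit (bit 30) of 0.5 yields 2**127 ~ 1.7e38,
# while flipping a low mantissa bit barely changes the value.
print(flip_bit(w, 30))  # huge value: the flipped weight dominates the layer's output
print(flip_bit(w, 0))   # ~0.5: a low-order flip is nearly harmless
```

This asymmetry is why attacks such as bit-flip fault injection concentrate on a few exponent bits of a few weights, and why reducing the number of individually critical weights (as FAT-RABBIT's training does) blunts the attack without any hardware cost.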