FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks

dc.contributor.author: Pourmehrani, Hossein
dc.contributor.author: Bahrami, Javad
dc.contributor.author: Nooralinejad, Parsa
dc.contributor.author: Pirsiavash, Hamed
dc.contributor.author: Karimi, Naghmeh
dc.date.accessioned: 2024-12-11T17:02:05Z
dc.date.available: 2024-12-11T17:02:05Z
dc.date.issued: 2024-11-06
dc.description: Conference: IEEE International Test Conference (ITC), San Diego, CA, USA
dc.description.abstract: Machine learning, and in particular deep learning, is used in a broad range of critical applications. Implementing such models in custom hardware can be highly beneficial thanks to lower power consumption and computation latency compared to GPUs. However, an error in their output can lead to disastrous outcomes. An adversary may force misclassification in the model's outcome by inducing bit-flips at targeted locations, thereby degrading accuracy. To fill this gap, this paper presents FAT-RABBIT, a cost-effective mechanism designed to mitigate such threats by training the model so that few weights are highly impactful to the outcome, thus reducing the model's sensitivity to fault-injection attacks. Moreover, to increase robustness against large bit-wise perturbations, we propose an optimization scheme called M-SAM. We then augment FAT-RABBIT with the M-SAM optimizer to further bolster model accuracy under bit-flipping fault attacks. Notably, these approaches incur no additional hardware overhead. Our experimental results demonstrate the robustness of FAT-RABBIT and its augmented version, called Augmented FAT-RABBIT, against such attacks.
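The abstract describes M-SAM as a variant of sharpness-aware optimization; the record does not give its details, so the following is only a minimal sketch of the generic SAM idea such schemes build on, using a hypothetical toy quadratic loss: take the gradient at an adversarially perturbed weight point and descend with that gradient, which favors flat minima where large bit-flip-style weight perturbations change the output less.

```python
import numpy as np

# Sketch of a generic sharpness-aware minimization (SAM) step on a toy
# quadratic loss. This is NOT the paper's M-SAM; it only illustrates the
# underlying idea: perturb the weights toward higher loss, then update
# using the gradient taken at that perturbed point.

def loss(w):
    return 0.5 * np.sum(w ** 2)  # toy convex loss (assumption, for illustration)

def grad(w):
    return w                      # gradient of the toy loss above

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    # Ascent direction: move rho along the normalized gradient ("sharp" point).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_sharp = grad(w + eps)       # gradient at the perturbed weights
    return w - lr * g_sharp       # descend using the sharp-point gradient

w = np.array([1.0, -2.0, 3.0])
for _ in range(100):
    w = sam_step(w)
print(loss(w))                    # loss shrinks toward zero over the iterations
```

The flat-minimum bias is what matters for fault tolerance: at a flat minimum, flipping a bit in a stored weight moves the loss far less than it would at a sharp one.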
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m26vpx-pqky
dc.identifier.uri: http://hdl.handle.net/11603/37026
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: UMBC Cybersecurity Institute
dc.title: FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5825-6637

Files

Original bundle

Name: FATRABBITFaultAwareTrainingtowardsRobustnessAgainstBitflipBasedAttacksinDeepNeuralNetworks.pdf
Size: 796.55 KB
Format: Adobe Portable Document Format