Countering PUF Modeling Attacks through Adversarial Machine Learning

dc.contributor.author: Ebrahimabadi, Mohammad
dc.contributor.author: Lalouani, Wassila
dc.contributor.author: Younis, Mohamed
dc.contributor.author: Karimi, Naghmeh
dc.date.accessioned: 2021-06-29T20:12:38Z
dc.date.available: 2021-06-29T20:12:38Z
dc.date.issued: 2021-07
dc.description: IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Tampa, Florida, USA
dc.description.abstract: A Physically Unclonable Function (PUF) is an effective option for device authentication, especially for IoT frameworks with resource-constrained devices. However, PUFs are vulnerable to modeling attacks, which build a PUF model using a small subset of its Challenge-Response Pairs (CRPs). We propose an effective countermeasure against such attacks by employing adversarial machine learning techniques that introduce errors (poison) into the adversary's model. The approach intermittently provides wrong responses to the fed challenges. Coordination among the communicating parties prevents the poisoned CRPs from causing device authentication to fail. Experimental results for a PUF implemented on FPGA demonstrate the efficacy of the proposed approach in thwarting modeling attacks. We also discuss the resiliency of the proposed scheme against impersonation and Sybil attacks.
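The core idea in the abstract — intermittently returning wrong responses while keeping the legitimate verifier in sync — can be sketched as below. This is a minimal illustration, not the paper's actual protocol: the shared seed, the SHA-256-based poisoning decision, the 30% poisoning rate, and the single-bit response model are all assumptions introduced here for clarity.

```python
import hashlib


def should_poison(seed: bytes, challenge: bytes, rate: float = 0.3) -> bool:
    """Both the verifier and the device derive the same keyed decision from a
    shared secret seed, so poisoned responses never break authentication,
    while an eavesdropper cannot tell which CRPs are poisoned."""
    digest = hashlib.sha256(seed + challenge).digest()
    # Map the first 4 digest bytes to [0, 1) deterministically.
    value = int.from_bytes(digest[:4], "big") / 2**32
    return value < rate


def device_response(true_response: int, seed: bytes, challenge: bytes) -> int:
    """Device side: flip the 1-bit PUF response on the agreed subset of
    challenges, poisoning any model trained on the observed CRPs."""
    if should_poison(seed, challenge):
        return true_response ^ 1
    return true_response


def verifier_check(received: int, expected_true: int,
                   seed: bytes, challenge: bytes) -> bool:
    """Verifier side: un-flip the expected bit on poisoned challenges
    before comparing, so legitimate authentication always succeeds."""
    if should_poison(seed, challenge):
        expected_true ^= 1
    return received == expected_true
```

An adversary collecting CRPs from the channel sees roughly 30% mislabeled pairs, which degrades any ML model fit to them, while the verifier, sharing the seed, authenticates the device on every challenge.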
dc.format.extent: 6 pages
dc.genre: conference papers and proceedings preprints
dc.identifier: doi:10.13016/m2ffsf-puj3
dc.identifier.citation: Ebrahimabadi, Mohammad et al.; Countering PUF Modeling Attacks through Adversarial Machine Learning; IEEE Computer Society Annual Symposium on VLSI (ISVLSI), July 2021
dc.identifier.uri: http://hdl.handle.net/11603/21843
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Countering PUF Modeling Attacks through Adversarial Machine Learning
dc.type: Text

Files

Original bundle
Name: ISVLSI_2021_1.pdf
Size: 1.5 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission