Countering PUF Modeling Attacks through Adversarial Machine Learning
Author/Creator
Ebrahimabadi, Mohammad et al.
Date
2021-07
Citation of Original Publication
Ebrahimabadi, Mohammad, et al. "Countering PUF Modeling Attacks through Adversarial Machine Learning." IEEE Computer Society Annual Symposium on VLSI (ISVLSI), July 2021.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
A Physically Unclonable Function (PUF) is an effective option for device authentication, especially in IoT frameworks with resource-constrained devices. However, PUFs are vulnerable to modeling attacks, which build a model of the PUF using a small subset of its Challenge-Response Pairs (CRPs). We propose an effective countermeasure against such attacks by employing adversarial machine learning techniques that introduce errors (poison) into the adversary's model. The approach intermittently provides wrong responses to the submitted challenges. Coordination among the communicating parties prevents the poisoned CRPs from causing device authentication to fail. Experimental results for a PUF implemented on an FPGA demonstrate the efficacy of the proposed approach in thwarting modeling attacks. We also discuss the resiliency of the proposed scheme against impersonation and Sybil attacks.
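To make the poisoning mechanism concrete, here is a minimal Python sketch of the idea described above: the device computes its true PUF response but, on a keyed pseudorandom schedule shared with the verifier, deliberately flips the bit it transmits. An adversary who trains on the raw traffic learns from poisoned CRPs, while the verifier undoes the agreed flips before checking. The simulated arbiter-PUF delay model, the SHA-256-based schedule, and all names below are illustrative assumptions, not the authors' implementation.

import hashlib
import numpy as np

N_STAGES = 64
rng = np.random.default_rng(0)
# Simulated arbiter PUF: the response is the sign of a linear delay model.
weights = rng.normal(size=N_STAGES + 1)

def phi(challenge: np.ndarray) -> np.ndarray:
    """Standard parity feature transform for an arbiter PUF."""
    c = 1 - 2 * challenge.astype(np.int8)   # map {0,1} -> {+1,-1}
    prods = np.cumprod(c[::-1])[::-1]       # phi_i = prod of c_j for j >= i
    return np.append(prods, 1.0)            # constant bias term

def puf_response(challenge: np.ndarray) -> int:
    return int(weights @ phi(challenge) > 0)

def is_poisoned(challenge: np.ndarray, key: bytes, rate: float = 0.3) -> bool:
    """Keyed schedule: both device and verifier can recompute it."""
    digest = hashlib.sha256(key + challenge.tobytes()).digest()
    return digest[0] < rate * 256

KEY = b"shared-secret"  # assumed pre-shared coordination secret

def device_reply(challenge: np.ndarray) -> int:
    """What the device transmits: intermittently flipped (poisoned)."""
    r = puf_response(challenge)
    return r ^ 1 if is_poisoned(challenge, KEY) else r

def verifier_accept(challenge: np.ndarray, reply: int, expected: int) -> bool:
    """Verifier undoes the agreed flip, so authentication still succeeds."""
    if is_poisoned(challenge, KEY):
        reply ^= 1
    return reply == expected

Under these assumptions, an eavesdropper fitting a model to observed (challenge, reply) pairs absorbs roughly 30% label noise, degrading the accuracy of the cloned model, while legitimate authentication is unaffected.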