Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning
Date
2022-10-22
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Attribution 4.0 International (CC BY 4.0)
Abstract
Federated Learning (FL) enables collaborative model building among a large number of participants
without explicit data sharing. However, this approach is vulnerable to privacy inference attacks.
FL models are at particular risk from gradient leakage attacks, which reconstruct sensitive data
from model gradients with a high success rate and exploit the communication that is inherent to
the FL architecture. Most alarmingly, a gradient leakage attack can be carried out covertly: it
does not hamper training performance while the attackers backtrack from the gradients to recover
information about the raw data. The two most common countermeasures are homomorphic encryption
and noise injection with differential privacy, but each suffers from a major drawback: the key
generation process of homomorphic encryption becomes tedious as the number of clients grows, and
noise-based differential privacy incurs a significant drop in global model accuracy. As an
alternative, we propose a mixed-precision quantized FL scheme and empirically show that it
resolves both of these issues. In addition, our approach offers greater robustness because
different layers of the deep model are quantized with different precisions and quantization
modes. We empirically validate our method on three benchmark datasets and find a minimal accuracy
drop in the global model after applying quantization.
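
For intuition, the sketch below shows one plausible form of per-layer mixed-precision gradient quantization in NumPy. The layer names, the bit-width plan, and the uniform symmetric quantizer are illustrative assumptions; the paper's actual precision assignment and quantization modes (e.g., per-channel scaling or stochastic rounding) are not reproduced here.

```python
import numpy as np

def quantize(grad, n_bits):
    """Uniform symmetric per-tensor quantization of a gradient to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1                             # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(grad))) / qmax, 1e-12)   # guard all-zero grads
    codes = np.clip(np.round(grad / scale), -qmax, qmax).astype(np.int32)
    return codes, scale

def dequantize(codes, scale):
    """Map integer codes back to approximate float gradients."""
    return codes.astype(np.float32) * scale

# Hypothetical per-layer bit widths: earlier layers keep more precision.
bit_plan = {"conv1": 8, "conv2": 6, "fc": 4}

# Client side: quantize each layer's gradient before communication.
rng = np.random.default_rng(0)
gradients = {name: rng.standard_normal((64, 64)).astype(np.float32)
             for name in bit_plan}
payload = {name: quantize(g, bit_plan[name]) for name, g in gradients.items()}

# Server side: dequantize before aggregation. An eavesdropper only sees
# the coarse integer codes, which degrades gradient-inversion quality.
recovered = {name: dequantize(codes, scale)
             for name, (codes, scale) in payload.items()}
```

Because coarser precision discards more gradient information, lower bit widths on sensitive layers plausibly hinder reconstruction at some cost in accuracy; the abstract's claim is that a per-layer mix keeps that accuracy cost minimal.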