Mixed Quantization Enabled Federated Learning to Tackle Gradient Inversion Attacks
Date
2023
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing. This approach, however, is vulnerable to gradient inversion attacks: because gradients are exchanged as part of FL's inherent architecture, an attacker has a high chance of recovering sensitive data from them. What makes gradient inversion particularly alarming is that it can be carried out covertly, without degrading training performance, while the attacker works backwards from the gradients to extract information about the raw data. Common existing approaches for preventing data reconstruction in FL include adding noise through differential privacy, homomorphic encryption, and gradient pruning. These approaches suffer from major drawbacks, including a tedious key-generation process for encryption as the number of clients grows, a significant drop in model performance, and the difficulty of selecting a suitable pruning ratio. As a countermeasure, we propose a mixed quantization enabled FL scheme and empirically show that the issues above can be resolved. In addition, our approach provides greater robustness because different layers of the deep model are quantized with different precisions and quantization modes. We empirically validate our defense against both iteration-based and recursion-based gradient inversion attacks, evaluate the proposed FL framework on three benchmark datasets, and find that our approach outperforms the baseline defense mechanisms.
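The abstract gives no implementation details, but the core idea of per-layer mixed quantization of shared gradients can be illustrated with a short sketch. The code below is an assumption-laden illustration rather than the authors' method: the layer names, bit-widths, quantization modes, and the quantize/quantize_client_update helpers are hypothetical, and it shows only a simple uniform quantize-dequantize step applied with different precisions and modes per layer before a client update is communicated.

```python
import numpy as np


def quantize(tensor, num_bits, mode="symmetric"):
    """Uniformly quantize a tensor to num_bits levels and dequantize it again.

    mode="symmetric"  maps values onto [-max|x|, +max|x|];
    mode="asymmetric" maps values onto [min(x), max(x)].
    """
    if mode == "symmetric":
        max_abs = float(np.max(np.abs(tensor)))
        scale = max_abs / (2 ** (num_bits - 1) - 1) if max_abs > 0 else 1.0
        return np.round(tensor / scale) * scale
    # asymmetric: shift the range so the minimum maps to the lowest level
    lo, hi = float(tensor.min()), float(tensor.max())
    scale = (hi - lo) / (2 ** num_bits - 1) if hi > lo else 1.0
    return np.round((tensor - lo) / scale) * scale + lo


# Hypothetical per-layer policy: precision and quantization mode differ by layer.
POLICY = {
    "conv1": (8, "symmetric"),
    "conv2": (6, "asymmetric"),
    "fc":    (4, "symmetric"),
}


def quantize_client_update(gradients, policy):
    """Apply the mixed-quantization policy to a client's gradients before
    they are sent to the server, limiting what a gradient inversion attack
    can reconstruct from the shared update."""
    return {name: quantize(grad, *policy[name]) for name, grad in gradients.items()}


# Toy usage: random tensors stand in for a real model's per-layer gradients.
rng = np.random.default_rng(0)
grads = {name: rng.normal(size=(16, 16)) for name in POLICY}
protected_update = quantize_client_update(grads, POLICY)
```

In the actual scheme, the per-layer bit-widths and modes would presumably be tuned to balance reconstruction resistance against model accuracy on the benchmark datasets mentioned above.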