Mixed Quantization Enabled Federated Learning to Tackle Gradient Inversion Attacks

dc.contributor.author: Ovi, Pretom Roy
dc.contributor.author: Dey, Emon
dc.contributor.author: Roy, Nirmalya
dc.contributor.author: Gangopadhyay, Aryya
dc.date.accessioned: 2023-06-20T19:13:15Z
dc.date.available: 2023-06-20T19:13:15Z
dc.date.issued: 2023
dc.description: The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023, Vancouver Convention Center, Canada
dc.description.abstract: Federated Learning (FL) enables collaborative model building among a large number of participants without explicit data sharing. However, this approach is vulnerable to gradient inversion attacks: because communication is inherent to the FL architecture, such attacks have a high success rate in retrieving sensitive data from model gradients. Most alarmingly, a gradient inversion attack can be carried out covertly, backtracking from the gradients to information about the raw data without hampering training performance. Common existing defenses against data reconstruction in FL include adding noise via differential privacy, homomorphic encryption, and gradient pruning. These approaches suffer from major drawbacks: a tedious key generation process during encryption as the number of clients grows, a significant performance drop, and difficulty in selecting a suitable pruning ratio. As a countermeasure, we propose a mixed quantization enabled FL scheme and empirically show that it resolves the issues above. Moreover, our approach provides additional robustness because different layers of the deep model are quantized with different precisions and quantization modes. We empirically validated our defense against both iteration-based and recursion-based gradient inversion attacks, evaluated the proposed FL framework on three benchmark datasets, and found that it outperforms the baseline defense mechanisms.
dc.description.sponsorship: We acknowledge the support of the U.S. Army Grant #W911NF21-20076, NSF grant #1923982 and ONR grant #N00014-23-1-2119.
dc.description.uri: https://openaccess.thecvf.com/content/CVPR2023W/FedVision/papers/Ovi_Mixed_Quantization_Enabled_Federated_Learning_To_Tackle_Gradient_Inversion_Attacks_CVPRW_2023_paper.pdf
dc.format.extent: 9 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m24xpk-b3y7
dc.identifier.uri: http://hdl.handle.net/11603/28233
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Mixed Quantization Enabled Federated Learning to Tackle Gradient Inversion Attacks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-1290-0378
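
The abstract above describes the defense only at a high level: before a client shares its update, each layer's gradients are quantized with a layer-specific precision and quantization mode. Below is a minimal, hypothetical Python/NumPy sketch of such per-layer mixed quantization; the uniform quantizer, the nearest-vs-stochastic rounding "modes", and the `layer_policy` bit assignments are illustrative assumptions, not the authors' exact scheme from the paper.

```python
import numpy as np

def quantize_uniform(grad, num_bits, stochastic=False):
    """Uniformly quantize a gradient tensor to num_bits of precision.

    stochastic=True uses stochastic rounding, stochastic=False uses
    nearest rounding -- two different quantization "modes".
    (Illustrative assumption, not the paper's exact quantizer.)
    """
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(grad))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    scaled = grad / scale
    if stochastic:
        # Round down, then round up with probability equal to the fraction.
        floor = np.floor(scaled)
        q = floor + (np.random.rand(*grad.shape) < (scaled - floor))
    else:
        q = np.round(scaled)
    q = np.clip(q, -qmax, qmax)
    return q * scale  # dequantized gradient that the client actually shares

# Hypothetical per-layer policy: different precisions and modes per layer.
layer_policy = {
    "conv1": dict(num_bits=4, stochastic=True),
    "conv2": dict(num_bits=6, stochastic=False),
    "fc":    dict(num_bits=8, stochastic=True),
}

def quantize_client_update(named_grads):
    """Apply mixed per-layer quantization before sending updates to the server."""
    return {name: quantize_uniform(g, **layer_policy[name])
            for name, g in named_grads.items()}

# Example: a toy client update for a three-layer model.
update = {
    "conv1": np.random.randn(8, 3, 3, 3),
    "conv2": np.random.randn(16, 8, 3, 3),
    "fc":    np.random.randn(10, 16),
}
shared = quantize_client_update(update)
```

The intuition is that an attacker inverting these coarsely and heterogeneously quantized gradients reconstructs much noisier inputs, while the server can still aggregate the dequantized updates as usual.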

Files

Original bundle

Name: Ovi_Mixed_Quantization_Enabled_Federated_Learning_To_Tackle_Gradient_Inversion_Attacks_CVPRW_2023_paper.pdf
Size: 3.26 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed to upon submission