Confident federated learning to tackle label flipped data poisoning attacks
dc.contributor.author | Ovi, Pretom Roy | |
dc.contributor.author | Gangopadhyay, Aryya | |
dc.contributor.author | Erbacher, Robert F. | |
dc.contributor.author | Busart, Carl | |
dc.date.accessioned | 2023-07-06T22:05:39Z | |
dc.date.available | 2023-07-06T22:05:39Z | |
dc.date.issued | 2023-06-12 | |
dc.description | SPIE Defense + Commercial Sensing, 2023, Orlando, Florida, United States | en_US |
dc.description.abstract | Federated learning (FL) enables collaborative model building among a large number of participants without revealing sensitive data to the central server. However, because of its distributed nature, FL has limited control over local data and the corresponding training process, which makes it susceptible to data poisoning attacks in which malicious workers train the model on corrupted data. In particular, an attacker on the worker side can mount such an attack simply by swapping the labels of local training instances. Workers under attack then carry incorrect information to the server, poison the global model, and cause misclassifications, so detecting and excluding poisoned training samples from local training is crucial in federated training. To address this, we propose a federated learning framework, Confident Federated Learning, that prevents data poisoning attacks on local workers. We first validate the label quality of the training samples by characterizing and identifying label errors in the training data, and then exclude the detected mislabeled samples from local training. We evaluate the proposed approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets, and the experimental results validate the robustness of the framework against data poisoning attacks, detecting mislabeled samples with above 85% accuracy. | en_US |
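The abstract describes filtering label-flipped samples by characterizing label errors before local training. The paper's exact procedure is not reproduced here; the following is a minimal sketch of a confident-learning-style filter, assuming out-of-sample predicted probabilities are available for each local sample (function names and the synthetic attack below are illustrative, not from the paper).

```python
import numpy as np

def flag_label_errors(pred_probs, noisy_labels):
    """Flag likely mislabeled samples using per-class confidence
    thresholds, a simplified confident-learning-style heuristic."""
    n_classes = pred_probs.shape[1]
    # Per-class threshold: mean predicted probability of class j
    # over the samples currently labeled j.
    thresholds = np.array([
        pred_probs[noisy_labels == j, j].mean() for j in range(n_classes)
    ])
    flags = np.zeros(len(noisy_labels), dtype=bool)
    for i, y in enumerate(noisy_labels):
        # Classes whose predicted probability clears their own threshold.
        above = np.where(pred_probs[i] >= thresholds)[0]
        if len(above) and y not in above:
            flags[i] = True  # confidently belongs to another class
    return flags

# Simulated label-flipping attack on synthetic data (stand-in for a
# malicious worker's local dataset).
rng = np.random.default_rng(0)
n, k = 200, 3
true_labels = rng.integers(0, k, n)
noisy_labels = true_labels.copy()
flipped = rng.choice(n, 20, replace=False)
noisy_labels[flipped] = (noisy_labels[flipped] + 1) % k  # flip labels

# Stand-in for cross-validated predicted probabilities: the model is
# assumed to be mostly right about the true class.
probs = np.full((n, k), 0.1)
probs[np.arange(n), true_labels] = 0.8

flags = flag_label_errors(probs, noisy_labels)
# Flagged samples would be excluded from the worker's local training.
```

In a federated setting, each worker would run this filter locally before contributing updates, so mislabeled samples never influence the global model.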
dc.description.sponsorship | This research is supported by U.S. Army Grant No. W911NF21-20076. | en_US |
dc.description.uri | https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12538/125380Z/Confident-federated-learning-to-tackle-label-flipped-data-poisoning-attacks/10.1117/12.2663911.full | en_US |
dc.format.extent | 11 pages | en_US |
dc.genre | conference papers and proceedings | en_US |
dc.genre | journal articles | en_US |
dc.genre | presentations (communicative events) | en_US |
dc.identifier | doi:10.13016/m2mcyj-kztn | |
dc.identifier.citation | Pretom Roy Ovi, Aryya Gangopadhyay, Robert F. Erbacher, Carl Busart, "Confident federated learning to tackle label flipped data poisoning attacks," Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380Z (12 June 2023); doi: 10.1117/12.2663911 | en_US |
dc.identifier.uri | https://doi.org/10.1117/12.2663911 | |
dc.identifier.uri | http://hdl.handle.net/11603/28449 | |
dc.language.iso | en_US | en_US |
dc.publisher | SPIE | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law. | en_US |
dc.rights | Public Domain Mark 1.0 | * |
dc.rights.uri | http://creativecommons.org/publicdomain/mark/1.0/ | * |
dc.title | Confident federated learning to tackle label flipped data poisoning attacks | en_US |
dc.type | Text | en_US |
Files
License bundle (1 of 1)
- Name: license.txt
- Size: 2.56 KB
- Format: Item-specific license agreed upon to submission