Confident federated learning to tackle label flipped data poisoning attacks

dc.contributor.author: Ovi, Pretom Roy
dc.contributor.author: Gangopadhyay, Aryya
dc.contributor.author: Erbacher, Robert F.
dc.contributor.author: Busart, Carl
dc.date.accessioned: 2023-07-06T22:05:39Z
dc.date.available: 2023-07-06T22:05:39Z
dc.date.issued: 2023-06-12
dc.description: SPIE Defense + Commercial Sensing, 2023, Orlando, Florida, United States
dc.description.abstract: Federated learning (FL) enables collaborative model building among a large number of participants without revealing sensitive data to the central server. Because of its distributed nature, however, FL has limited control over local data and the corresponding training process, which makes it susceptible to data poisoning attacks in which malicious workers train on corrupted data. In particular, attackers on the worker side can initiate such attacks simply by swapping the labels of local training instances. Workers under attack then send incorrect updates to the server, poisoning the global model and causing misclassifications. Detecting poisoned training samples and excluding them from local training is therefore crucial in federated training. To address this, we propose Confident Federated Learning, a federated learning framework that prevents data poisoning attacks on local workers. The framework first validates the label quality of local training samples by characterizing and identifying label errors, and then excludes the detected mislabeled samples from local training. We evaluate the proposed approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results validate the framework's robustness against data poisoning attacks, detecting mislabeled samples with above 85% accuracy.
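The label-validation step described in the abstract follows the confident-learning idea: score each local sample with out-of-sample predicted probabilities, derive a per-class self-confidence threshold, and drop samples whose given label falls below it before the local training round. The sketch below illustrates that idea under stated assumptions only; the probe model (logistic regression), the function name filter_flipped_labels, and the exact threshold rule are illustrative choices, not details taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def filter_flipped_labels(X, y, n_classes, cv=5):
    """Return indices of samples judged correctly labeled (illustrative).

    Assumes y contains integer labels 0..n_classes-1, all present in X.
    """
    # Out-of-sample class probabilities: each sample is scored by a model
    # that never saw it during training.
    probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y,
        cv=cv, method="predict_proba",
    )
    # Per-class threshold: mean self-confidence of samples carrying that label.
    thresholds = np.array([probs[y == c, c].mean() for c in range(n_classes)])
    # Flag a sample when its confidence in the given label is below the class
    # threshold and the model prefers a different class (a likely flipped label).
    self_conf = probs[np.arange(len(y)), y]
    suspect = (self_conf < thresholds[y]) & (probs.argmax(axis=1) != y)
    return np.flatnonzero(~suspect)

In this sketch, each worker would run the filter on its local data before a training round and train only on the kept indices, so label-flipped samples never enter the update sent to the server.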
dc.description.sponsorship: This research is supported by U.S. Army Grant No. W911NF21-20076.
dc.description.uri: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12538/125380Z/Confident-federated-learning-to-tackle-label-flipped-data-poisoning-attacks/10.1117/12.2663911.full
dc.format.extent: 11 pages
dc.genre: conference papers and proceedings
dc.genre: journal articles
dc.genre: presentations (communicative events)
dc.identifier: doi:10.13016/m2mcyj-kztn
dc.identifier.citation: Pretom Roy Ovi, Aryya Gangopadhyay, Robert F. Erbacher, Carl Busart, "Confident federated learning to tackle label flipped data poisoning attacks," Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380Z (12 June 2023); doi: 10.1117/12.2663911
dc.identifier.uri: https://doi.org/10.1117/12.2663911
dc.identifier.uri: http://hdl.handle.net/11603/28449
dc.language.iso: en_US
dc.publisher: SPIE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain Mark 1.0
dc.rights.uri: http://creativecommons.org/publicdomain/mark/1.0/
dc.title: Confident federated learning to tackle label flipped data poisoning attacks
dc.type: Text

Files

Original bundle

Name: 125380Z.pdf
Size: 1.06 MB
Format: Adobe Portable Document Format
Description: Main article
Name: Presentation Transcript.pdf
Size: 61.72 KB
Format: Adobe Portable Document Format
Description: Transcript

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission