Secure Federated Training: Detecting Compromised Nodes and Identifying the Type of Attacks

dc.contributor.author: Ovi, Pretom Roy
dc.contributor.author: Gangopadhyay, Aryya
dc.contributor.author: Erbacher, Robert F.
dc.contributor.author: Busart, Carl
dc.date.accessioned: 2023-09-06T20:21:44Z
dc.date.available: 2023-09-06T20:21:44Z
dc.date.issued: 2023-03-23
dc.description: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA); Nassau, Bahamas; 12-14 December 2022
dc.description.abstract: Federated learning (FL) allows a set of clients to collaboratively train a model without sharing private data. As a result, the server has limited control over the local data and the corresponding training process, leaving FL susceptible to poisoning attacks in which malicious clients use manipulated training data or local updates to poison the global model. In this work, we first studied data-level and model-level poisoning attacks. We simulated model poisoning attacks by tampering with the local model updates during each round of communication, and data poisoning attacks by training a few clients on malicious data. Clients under such attacks carry faulty information to the server, poison the global model, and prevent it from converging. Detecting the clients under attack and identifying the type of attack are therefore required to recover those clients from their malicious status. To address these issues, we proposed an approach within the federated framework that enables the detection of malicious clients and attack types while preserving data privacy. Our clustering-based approach uses the neuron activations of the local models to identify the type of poisoning attack, and we also proposed checking the weight distribution of the local model updates across the participating clients to detect malicious clients. Our experimental results validated the robustness of the proposed framework against the attacks above by successfully detecting the compromised clients and the attack types. Moreover, the global model trained on MNIST data could not reach the optimal point even after 75 rounds because of malicious clients, whereas the proposed approach, by detecting the malicious clients, ensured convergence within only 30 rounds in the independent and identically distributed (IID) setup and 40 rounds in the non-independent and identically distributed (non-IID) setup.
dc.description.sponsorship: This research is partially supported by NSF Grant No. 1923982 and U.S. Army Grant No. W911NF21-20076.
dc.description.uri: https://ieeexplore.ieee.org/document/10069230
dc.format.extent: 6 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2esar-vrwa
dc.identifier.citation: P. R. Ovi, A. Gangopadhyay, R. F. Erbacher and C. Busart, "Secure Federated Training: Detecting Compromised Nodes and Identifying the Type of Attacks," 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), Nassau, Bahamas, 2022, pp. 1115-1120, doi: 10.1109/ICMLA55696.2022.00183.
dc.identifier.uri: https://doi.org/10.1109/ICMLA55696.2022.00183
dc.identifier.uri: http://hdl.handle.net/11603/29600
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This work was written as part of one of the authors' official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain Mark 1.0
dc.rights.uri: http://creativecommons.org/publicdomain/mark/1.0/
dc.title: Secure Federated Training: Detecting Compromised Nodes and Identifying the Type of Attacks
dc.type: Text
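
The abstract above describes two detection signals: clustering neuron activations from the local models to identify the attack type, and inspecting the weight distribution of the local model updates to flag malicious clients. The following is a minimal, hypothetical Python sketch of the second idea only; the simulated updates, the chosen summary statistics, and the use of scikit-learn's KMeans are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: flag anomalous clients by clustering simple
    # weight-distribution statistics of their local updates.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Simulate flattened local model updates: 8 benign clients and 2 clients
    # whose updates are shifted/scaled to mimic tampering (assumed values).
    benign = rng.normal(loc=0.0, scale=0.05, size=(8, 1000))
    poisoned = rng.normal(loc=0.5, scale=0.5, size=(2, 1000))
    updates = np.vstack([benign, poisoned])

    # Summarize each client's update by a few distribution statistics.
    stats = np.column_stack([
        updates.mean(axis=1),
        updates.std(axis=1),
        np.linalg.norm(updates, axis=1),
    ])

    # Two-way clustering; the minority cluster is treated as suspicious.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)
    minority = np.argmin(np.bincount(labels))
    suspected = np.where(labels == minority)[0]
    print("Suspected malicious clients:", suspected)  # expected here: [8 9]

In a real federated round, the server would compute such statistics from the received updates rather than from simulated data; the clustering step stands in for the paper's detection stage only at a schematic level.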

Files

Original bundle

Name: Secure_Federated_Training_Detecting_Compromised_Nodes_and_Identifying_the_Type_of_Attacks.pdf
Size: 8.74 MB
Format: Adobe Portable Document Format