Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning

dc.contributor.author: Wu, Yusen
dc.contributor.author: Chen, Hao
dc.contributor.author: Wang, Xin
dc.contributor.author: Liu, Chao
dc.contributor.author: Nguyen, Phuong
dc.contributor.author: Yesha, Yelena
dc.date.accessioned: 2021-10-07T17:56:14Z
dc.date.available: 2021-10-07T17:56:14Z
dc.date.issued: 2021-09-05
dc.description.abstract: Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models in large-scale distributed machine learning systems, creating security risks for their prediction outcomes. For example, attackers may attempt to poison a model by presenting inaccurate or misrepresentative data or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither attacked nor failing, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
dc.description.sponsorship: We gratefully acknowledge the support of the NSF through grant IIP-1919159. We also acknowledge the support of the IBM research team.
dc.description.uri: https://arxiv.org/abs/2109.02018
dc.format.extent: 10 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2nnqk-wqj5
dc.identifier.uri: http://hdl.handle.net/11603/23068
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-8032-7382
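
The abstract above describes ParSGD only at a high level; its exact aggregation rule is specified in the linked preprint (https://arxiv.org/abs/2109.02018). As a purely illustrative sketch of the general family of Byzantine-robust gradient aggregation such algorithms belong to, the Python snippet below implements a coordinate-wise trimmed mean over per-worker gradients. The function name, parameters, and tolerance bound are assumptions for illustration, not the authors' ParSGD implementation.

import numpy as np

def trimmed_mean_aggregate(gradients, num_byzantine):
    """Coordinate-wise trimmed mean over worker gradients.

    gradients: list of 1-D numpy arrays, one gradient vector per worker.
    num_byzantine: assumed upper bound f on compromised or failed workers;
    per coordinate, the f smallest and f largest values are discarded
    before averaging, so up to f outliers cannot skew the result.
    (Hypothetical helper, not taken from the paper.)
    """
    g = np.stack(gradients)            # shape: (n_workers, n_params)
    g_sorted = np.sort(g, axis=0)      # sort each coordinate independently
    trimmed = g_sorted[num_byzantine : g.shape[0] - num_byzantine]
    return trimmed.mean(axis=0)

# Example: 7 workers, 3 of them poisoned with large bogus gradients.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(4)]
poisoned = [np.full(4, 100.0) for _ in range(3)]
print(trimmed_mean_aggregate(honest + poisoned, num_byzantine=3))
# The output stays near the honest mean (~1.0 per coordinate) despite
# the poisoned workers, illustrating tolerance of almost half the nodes.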

Files

Original bundle

Name: 2109.02018.pdf
Size: 956.47 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission