A General Framework for Auditing Differentially Private Machine Learning

dc.contributor.author: Lu, Fred
dc.contributor.author: Munoz, Joseph
dc.contributor.author: Fuchs, Maya
dc.contributor.author: LeBlond, Tyler
dc.contributor.author: Zaresky-Williams, Elliott
dc.contributor.author: Raff, Edward
dc.contributor.author: Ferraro, Francis
dc.contributor.author: Testa, Brian
dc.date.accessioned: 2022-11-10T17:26:04Z
dc.date.available: 2022-11-10T17:26:04Z
dc.date.issued: 2022-10-31
dc.description: Thirty-Sixth Conference on Neural Information Processing Systems, NeurIPS 2022, Nov 28 2022, New Orleans, Louisiana, United States of America.
dc.description.abstract: We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.
dc.description.sponsorship: Approved for Public Release; Distribution Unlimited. PA #: AFRL-2022-3247.
dc.description.uri: https://openreview.net/forum?id=AKM3C3tsSx3
dc.format.extent: 12 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2k11n-bypn
dc.identifier.citation: Lu, Fred, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott V. Zaresky-Williams, Edward Raff, Francis Ferraro, and Brian Testa. "A General Framework for Auditing Differentially Private Machine Learning," Thirty-Sixth Conference on Neural Information Processing Systems 2022. https://openreview.net/forum?id=AKM3C3tsSx3.
dc.identifier.uri: http://hdl.handle.net/11603/26290
dc.language.iso: en_US
dc.publisher: OpenReview
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain Mark 1.0
dc.rights.uri: http://creativecommons.org/publicdomain/mark/1.0/
dc.subject: UMBC Ebiquity Research Group
dc.title: A General Framework for Auditing Differentially Private Machine Learning
dc.type: Text

Files

Original bundle
Name: 7637_a_general_framework_for_auditi.pdf
Size: 1 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission