A General Framework for Auditing Differentially Private Machine Learning
Citation of Original Publication
Lu, Fred, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott V. Zaresky-Williams, Edward Raff, Francis Ferraro, and Brian Testa. “A General Framework for Auditing Differentially Private Machine Learning.” In Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS 2022), 2022. https://openreview.net/forum?id=AKM3C3tsSx3.
Rights
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain Mark 1.0
Abstract
We present a framework to statistically audit the privacy guarantee conferred by a
differentially private machine learner in practice. While previous works have taken
steps toward evaluating privacy loss through poisoning attacks or membership
inference, they have been tailored to specific models or have demonstrated low
statistical power. Our work develops a general methodology to empirically evaluate
the privacy of differentially private machine learning implementations, combining
improved privacy search and verification methods with a toolkit of influence-based
poisoning attacks. We demonstrate significantly improved auditing power over
previous approaches on a variety of models including logistic regression, Naive
Bayes, and random forest. Our method can be used to detect privacy violations due
to implementation errors or misuse. When violations are not present, it can aid in
understanding the amount of information that can be leaked from a given dataset,
algorithm, and privacy specification.
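
Below is a minimal sketch of the style of statistical audit described in the abstract, under a toy set of assumptions: a Laplace counting query stands in for the differentially private learner, a simple thresholding distinguisher plays the role of the attack, and Clopper-Pearson confidence intervals on the attack's error rates give a conservative empirical lower bound on epsilon. The function names, the threshold rule, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Sketch of statistical privacy auditing via a distinguishing attack.
# The Laplace counting query is a stand-in for a DP learner; all names and
# parameter values are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(k, n, alpha=0.05):
    """One-sided Clopper-Pearson upper confidence bound on a binomial rate."""
    if k >= n:
        return 1.0
    return beta.ppf(1 - alpha, k + 1, n - k)


def empirical_eps_lower_bound(fp, fn, n_trials, alpha=0.05):
    """Lower-bound epsilon from attack error rates: any (eps, 0)-DP mechanism
    forces FPR + e^eps * FNR >= 1, hence eps >= log((1 - FPR) / FNR),
    and symmetrically with the roles of FPR and FNR swapped."""
    fpr_hi = clopper_pearson_upper(fp, n_trials, alpha)
    fnr_hi = clopper_pearson_upper(fn, n_trials, alpha)
    bounds = []
    if fnr_hi > 0 and fpr_hi < 1:
        bounds.append(np.log((1 - fpr_hi) / fnr_hi))
    if fpr_hi > 0 and fnr_hi < 1:
        bounds.append(np.log((1 - fnr_hi) / fpr_hi))
    return max(bounds) if bounds else 0.0


def audit(claimed_eps=1.0, n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    # Neighboring datasets D0 and D1 differ in exactly one record
    # (the canary / poison point), so their counts differ by one.
    count_d0, count_d1 = 50, 51
    threshold = (count_d0 + count_d1) / 2  # simple distinguishing attack
    fp = fn = 0
    for _ in range(n_trials):
        # Mechanism under audit: Laplace mechanism on a counting query,
        # which satisfies claimed_eps-DP for a sensitivity-1 query.
        out_d0 = count_d0 + rng.laplace(scale=1.0 / claimed_eps)
        out_d1 = count_d1 + rng.laplace(scale=1.0 / claimed_eps)
        if out_d0 > threshold:   # attacker wrongly claims the record is present
            fp += 1
        if out_d1 <= threshold:  # attacker wrongly claims the record is absent
            fn += 1
    eps_lb = empirical_eps_lower_bound(fp, fn, n_trials)
    print(f"claimed eps = {claimed_eps:.2f}, empirical lower bound = {eps_lb:.2f}")


if __name__ == "__main__":
    audit()
```

Replacing the Laplace query with full training runs on neighboring (e.g., poisoned vs. unpoisoned) datasets, and the threshold test with a stronger attack, would turn the same hypothesis-testing skeleton into a model-level audit; a lower bound that exceeds the claimed epsilon would signal an implementation error or misuse.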
