Calibrating Practical Privacy Risks for Differentially Private Machine Learning
dc.contributor.author | Gu, Yuechun | |
dc.contributor.author | Chen, Keke | |
dc.date.accessioned | 2024-12-11T17:02:05Z | |
dc.date.available | 2024-12-11T17:02:05Z | |
dc.date.issued | 2024-10-30 | |
dc.description.abstract | Differential privacy quantifies privacy through the privacy budget ϵ, yet its practical interpretation is complicated by variations across models and datasets. Recent research on differentially private machine learning and membership inference has highlighted that, under the same theoretical ϵ setting, the attack success rate (ASR) of likelihood-ratio-based membership inference (LiRA) may vary with the specific dataset and model, making ASR a potentially better indicator of real-world privacy risk. Inspired by this practical privacy measure, we study approaches that lower the attack success rate to allow more flexible privacy budget settings in model training. We find that by selectively suppressing privacy-sensitive features, we can achieve lower ASR values without compromising application-specific data utility. We use the SHAP and LIME model explainers to evaluate feature sensitivity and develop feature-masking strategies. Our findings demonstrate that the LiRA ASRᴹ on a model M can properly indicate the inherent privacy risk of a dataset for modeling, and that datasets can be modified to enable larger theoretical ϵ settings while achieving equivalent practical privacy protection. We have conducted extensive experiments to show the inherent link between ASR and a dataset's privacy risk. By carefully selecting features to mask, we can preserve more data utility under equivalent practical privacy protection and relaxed ϵ settings. The implementation details are shared online at \url{https://anonymous.4open.science/r/On-sensitive-features-and-empirical-epsilon-lower-bounds-BF67/}. | |
dc.description.sponsorship | This work is partially supported by the National Science Foundation (Award #2232824). | |
dc.description.uri | http://arxiv.org/abs/2410.22673 | |
dc.format.extent | 10 pages | |
dc.genre | journal articles | |
dc.genre | preprints | |
dc.identifier | doi:10.13016/m2flfq-pbos | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2410.22673 | |
dc.identifier.uri | http://hdl.handle.net/11603/37025 | |
dc.language.iso | en_US | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
dc.relation.ispartof | UMBC Student Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0/ | |
dc.subject | Computer Science - Cryptography and Security | |
dc.subject | Computer Science - Machine Learning | |
dc.title | Calibrating Practical Privacy Risks for Differentially Private Machine Learning | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0002-9996-156X | |
dcterms.creator | https://orcid.org/0009-0006-4945-7310 |
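The abstract above describes a feature-masking approach guided by explainer-based sensitivity scores (SHAP and LIME). The sketch below only illustrates that general idea on tabular data: rank features by mean absolute SHAP value and suppress the most sensitive columns before training. It is a minimal illustration under assumptions, not the authors' pipeline; the dataset, the RandomForest model, TOP_K, and the mask-with-column-mean choice are all assumed for demonstration, and the code released at the URL in the abstract should be treated as authoritative.

# Illustrative sketch: rank features by mean |SHAP| value and suppress the most
# privacy-sensitive columns before training. Dataset, model, TOP_K, and the
# mask-with-column-mean choice are assumptions, not the paper's exact pipeline.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

TOP_K = 5  # number of most sensitive features to mask (assumed)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a non-private reference model used only to obtain sensitivity scores.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Per-feature sensitivity = mean absolute SHAP value over the evaluation samples.
# shap's return shape varies by version: a list of per-class arrays, a 2-D
# (samples, features) array, or a 3-D (samples, features, classes) array.
raw = shap.TreeExplainer(model).shap_values(X_test)
if isinstance(raw, list):
    abs_vals = np.mean([np.abs(a) for a in raw], axis=0)
else:
    abs_vals = np.abs(raw)
    if abs_vals.ndim == 3:
        abs_vals = abs_vals.mean(axis=2)
sensitivity = abs_vals.mean(axis=0)  # shape: (n_features,)

# Mask the TOP_K most sensitive features by replacing them with their training
# mean, one simple suppression strategy; dropping the columns is another option.
masked_idx = np.argsort(sensitivity)[::-1][:TOP_K]
col_means = X_train[:, masked_idx].mean(axis=0)
X_train_masked, X_test_masked = X_train.copy(), X_test.copy()
X_train_masked[:, masked_idx] = col_means
X_test_masked[:, masked_idx] = col_means

# In the actual workflow, differentially private training (e.g., DP-SGD) and a
# membership-inference ASR evaluation would follow on the masked data; here we
# only check that the masked data still supports a usable model.
masked_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train_masked, y_train)
print("masked feature indices:", masked_idx.tolist())
print("test accuracy on masked data:", masked_model.score(X_test_masked, y_test))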