Auditing Approximate Machine Unlearning for Differentially Private Models

dc.contributor.author: Gu, Yuechun
dc.contributor.author: He, Jiajie
dc.contributor.author: Chen, Keke
dc.date.accessioned: 2025-10-03T19:33:53Z
dc.date.issued: 2025-08-26
dc.description: ICDM 2025, 25th IEEE International Conference on Data Mining, November 12-15, 2025, Washington, DC, USA
dc.description.abstract: Approximate machine unlearning aims to remove the effect of specific data from trained models to protect individuals' privacy. Existing methods focus on the removed records and assume the retained ones are unaffected. However, recent studies on the privacy onion effect indicate that this assumption may not hold. In particular, when the model is differentially private, no study has examined whether the retained records still satisfy the differential privacy (DP) criterion under existing machine unlearning methods. This paper takes a holistic approach to auditing the privacy risks of both unlearned and retained samples after approximate unlearning algorithms are applied. We propose privacy criteria for unlearned and retained samples, respectively, from the perspectives of DP and membership inference attacks (MIAs). To make the auditing process more practical, we also develop an efficient MIA, A-LiRA, which uses data augmentation to reduce the cost of shadow model training. Our experimental findings indicate that existing approximate machine unlearning algorithms may inadvertently compromise the privacy of retained samples in differentially private models, highlighting the need for differentially private unlearning algorithms. For reproducibility, we have published our code: https://anonymous.4open.science/r/Auditing-machine-unlearning-CB10/README.md
dc.description.uri: http://arxiv.org/abs/2508.18671
dc.format.extent: 10 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2hkzy-hj0h
dc.identifier.uri: https://doi.org/10.48550/arXiv.2508.18671
dc.identifier.uri: http://hdl.handle.net/11603/40357
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: Attribution-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-sa/4.0/
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Computer Science - Machine Learning
dc.subject: UMBC Cyber Defense Lab (CDL)
dc.title: Auditing Approximate Machine Unlearning for Differentially Private Models
dc.type: Text
dcterms.creator: https://orcid.org/0009-0006-4945-7310
dcterms.creator: https://orcid.org/0009-0009-7956-8355
dcterms.creator: https://orcid.org/0000-0002-9996-156X
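
Note on the abstract above: A-LiRA builds on the LiRA family of membership inference attacks. The following is a minimal sketch of the standard LiRA likelihood-ratio score only, not the paper's A-LiRA implementation; the data-augmentation trick for reducing shadow-model training cost is specific to the paper and is not reproduced here, and the function names and per-sample Gaussian approximation are assumed from the original LiRA formulation.

    import numpy as np
    from scipy.stats import norm

    def logit_confidence(p_true: float) -> float:
        """Logit-scale the model's probability on the true label (LiRA convention)."""
        p = np.clip(p_true, 1e-6, 1.0 - 1e-6)
        return float(np.log(p) - np.log(1.0 - p))

    def lira_score(target_conf: float, in_confs: np.ndarray, out_confs: np.ndarray) -> float:
        """Likelihood-ratio membership score for one candidate sample.

        target_conf: logit-scaled confidence of the audited (e.g., unlearned)
            model on the candidate sample.
        in_confs / out_confs: logit-scaled confidences from shadow models
            trained with / without that sample.
        Returns a log-likelihood ratio; larger values suggest membership.
        """
        mu_in, sd_in = np.mean(in_confs), np.std(in_confs) + 1e-8
        mu_out, sd_out = np.mean(out_confs), np.std(out_confs) + 1e-8
        # Fit per-sample Gaussians to the "in" and "out" confidence
        # distributions and compare their log-densities at the observation.
        return float(norm.logpdf(target_conf, mu_in, sd_in)
                     - norm.logpdf(target_conf, mu_out, sd_out))

An audit in the spirit of the abstract would compute such scores for both unlearned and retained samples and compare the resulting attack success against the proposed DP-based criteria; the precise criteria and thresholds are defined in the paper.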

Files

Original bundle

Name: AuditingApproximate.pdf
Size: 292.09 KB
Format: Adobe Portable Document Format