Authors: Gu, Yuechun; He, Jiajie; Chen, Keke
Date accessioned: 2024-12-11
Date available: 2024-12-11
Date issued: 2024-10-30
DOI: https://doi.org/10.48550/arXiv.2410.22651
Handle: http://hdl.handle.net/11603/37027
Abstract: Training data privacy has been a top concern in AI modeling. While methods like differentially private learning allow data contributors to quantify acceptable privacy loss, model utility is often significantly damaged. In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments. In controlled data access, authorized model builders work in a restricted environment to access sensitive data, which can fully preserve data utility with a reduced risk of data leakage. However, unlike differential privacy, there is no quantitative measure that lets individual data contributors assess their privacy risk before participating in a machine learning task. We developed the demo prototype FT-PrivacyScore to show that it is possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task. The demo source code will be available at \url{https://github.com/RhincodonE/demo_privacy_scoring}.
Extent: 3 pages
Language: en-US
Rights: Attribution-NonCommercial-ShareAlike 4.0 International; https://creativecommons.org/licenses/by-nc-sa/4.0/
Subjects: Computer Science - Cryptography and Security; Computer Science - Machine Learning
Title: FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation
Type: Text