Latent Diffusion Unlearning: Protecting Against Unauthorized Personalization Through Trajectory Shifted Perturbations

dc.contributor.author: Devulapally, Naresh Kumar
dc.contributor.author: Agarwal, Shruti
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Lokhande, Vishnu Suresh
dc.date.accessioned: 2025-11-21T00:30:03Z
dc.date.issued: 2025-10-03
dc.description: 33rd ACM International Conference on Multimedia, October 27-31, 2025, Dublin, Ireland
dc.description.abstract: Text-to-image diffusion models have demonstrated remarkable effectiveness in rapid and high-fidelity personalization, even when provided with only a few user images. However, the effectiveness of personalization techniques has led to concerns regarding data privacy, intellectual property protection, and unauthorized usage. To mitigate such unauthorized usage and model replication, the idea of generating "unlearnable" training samples utilizing image poisoning techniques has emerged. Existing methods for this have limited imperceptibility, as they operate in the pixel space, which results in images with visible noise and artifacts. In this work, we propose a novel model-based perturbation strategy that operates within the latent space of diffusion models. Our method alternates between denoising and inversion while modifying the starting point of the denoising trajectory of diffusion models. This trajectory-shifted sampling ensures that the perturbed images maintain high visual fidelity to the original inputs while being resistant to inversion and personalization by downstream generative models. This approach integrates unlearnability into the framework of Latent Diffusion Models (LDMs), enabling a practical and imperceptible defense against unauthorized model adaptation. We validate our approach on four benchmark datasets to demonstrate robustness against state-of-the-art inversion attacks. Results demonstrate that our method achieves significant improvements in imperceptibility (∼8-10% on perceptual metrics including PSNR, SSIM, and FID) and robustness (∼10% on average across five adversarial settings), highlighting its effectiveness in safeguarding sensitive data.
dc.description.sponsorship: Prof. Lokhande acknowledges support provided by University at Buffalo Startup funds, an Adobe Research Gift, and internal funding from the University at Buffalo's Research and Economic Development office. Dr. Tejas Gokhale was supported by UMBC's Strategic Award for Research Transitions (START).
dc.description.uri: http://arxiv.org/abs/2510.03089
dc.format.extent: 16 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2gvnj-76zd
dc.identifier.uri: https://doi.org/10.48550/arXiv.2510.03089
dc.identifier.uri: http://hdl.handle.net/11603/40828
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Computer Science - Computer Vision and Pattern Recognition
dc.title: Latent Diffusion Unlearning: Protecting Against Unauthorized Personalization Through Trajectory Shifted Perturbations
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804
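
As summarized in the abstract, the proposed defense alternates between inversion and denoising in the latent space of a Latent Diffusion Model while shifting the starting point of each denoising trajectory. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's actual algorithm: the callables encode, decode, ddim_invert, and ddim_denoise are placeholders for an LDM's VAE encoder/decoder and DDIM inversion/sampling routines, and the iteration count and shift strength are illustrative assumptions.

import torch

def trajectory_shifted_perturbation(
    image: torch.Tensor,          # clean input image, shape (1, 3, H, W)
    encode, decode,               # hypothetical LDM VAE encode/decode callables
    ddim_invert, ddim_denoise,    # hypothetical DDIM inversion / sampling callables
    num_iters: int = 10,          # illustrative iteration count (assumption)
    shift_scale: float = 0.1,     # illustrative strength of the trajectory shift (assumption)
) -> torch.Tensor:
    """Alternate inversion and denoising in latent space, shifting the starting
    point of each denoising trajectory so the output stays visually close to
    the input while resisting downstream inversion and personalization."""
    latent = encode(image)
    for _ in range(num_iters):
        # Invert the current latent back toward the diffusion starting noise.
        noise_start = ddim_invert(latent)
        # Shift the starting point of the denoising trajectory.
        shifted_start = noise_start + shift_scale * torch.randn_like(noise_start)
        # Denoise from the shifted starting point to obtain a perturbed latent.
        latent = ddim_denoise(shifted_start)
    # Decode the perturbed latent back to image space.
    return decode(latent)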

Files

Original bundle

Name: 251003089v1.pdf
Size: 10.43 MB
Format: Adobe Portable Document Format