Membership Inference Attacks on LLM-based Recommender Systems
| dc.contributor.author | He, Jiajie | |
| dc.contributor.author | Gu, Yuechun | |
| dc.contributor.author | Chen, Min-Chun | |
| dc.contributor.author | Chen, Keke | |
| dc.date.accessioned | 2025-10-22T19:58:06Z | |
| dc.date.issued | 2025-09-06 | |
| dc.description.abstract | Large language models (LLMs) based recommender systems (RecSys) can adapt to different domains flexibly. It utilizes in-context learning (ICL), i.e., prompts, to customize the recommendation functions, which include sensitive historical user-specific item interactions, including implicit feedback like clicked items or explicit product reviews. Such private information may be exposed by novel privacy attacks. However, no study has been done on this important issue. We design several membership inference attacks (MIAs) aimed to revealing whether system prompts include victims’ historical interactions. The attacks are direct inquiry, contrast, hallucination, and poisoning attacks, each utilizes some unique features of LLMs or RecSys. We have carefully evaluated them on four of the latest open-source LLMs and three well-known RecSys benchmark datasets. The results confirm that the MIA threat on LLM RecSys is realistic: direct inquiry, contrast, and poisoning attacks show significantly high attack advantages. We also discussed possible methods to mitigate such MIA threats. | |
| dc.description.uri | http://arxiv.org/abs/2508.18665 | |
| dc.format.extent | 10 pages | |
| dc.genre | journal articles | |
| dc.genre | preprints | |
| dc.identifier | doi:10.13016/m2lwtq-zxsm | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2508.18665 | |
| dc.identifier.uri | http://hdl.handle.net/11603/40541 | |
| dc.language.iso | en | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Student Collection | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
| dc.subject | Computer Science - Cryptography and Security | |
| dc.subject | Computer Science - Information Retrieval | |
| dc.subject | UMBC Cyber Defense Lab (CDL) | |
| dc.subject | Computer Science - Machine Learning | |
| dc.subject | Computer Science - Artificial Intelligence | |
| dc.subject | Computer Science - Computation and Language | |
| dc.title | Membership Inference Attacks on LLM-based Recommender Systems | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0009-0009-7956-8355 | |
| dcterms.creator | https://orcid.org/0009-0006-4945-7310 | |
| dcterms.creator | https://orcid.org/0000-0002-9996-156X |
Files
Original bundle
1 - 1 of 1
