Membership Inference Attacks on LLM-based Recommender Systems

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless covered by a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

Large language model (LLM)-based recommender systems (RecSys) can adapt flexibly across different domains. They use in-context learning (ICL), i.e., prompts that include sensitive, user-specific historical item interactions, to customize the recommendation function. However, no prior study has examined whether such private information can be exposed by novel privacy attacks. We design four membership inference attacks (MIAs): Similarity, Memorization, Inquiry, and Poisoning attacks, which aim to reveal whether the system prompt includes a victim's historical interactions. We carefully evaluate them on the latest open-source LLMs and three well-known RecSys datasets. The results confirm that the MIA threat to LLM-based RecSys is realistic and that existing prompt-based defense methods may be insufficient to protect against these attacks.
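
To make the setting concrete, the sketch below illustrates, under assumed details, how an ICL system prompt for an LLM-based recommender might embed users' historical interactions, and how a simple similarity-style membership check could be scored. The `query_llm` helper, the prompt wording, and the overlap threshold are hypothetical placeholders for illustration, not the paper's actual attack implementation.

```python
# Minimal sketch (not the paper's implementation): an ICL prompt that embeds
# users' interaction histories, plus a similarity-style membership check.
# `query_llm`, the prompt wording, and the 0.3 threshold are assumptions.
from typing import Dict, List


def build_system_prompt(user_histories: Dict[str, List[str]]) -> str:
    """Embed per-user interaction histories as in-context demonstrations."""
    lines = ["You are a movie recommender. Example user histories:"]
    for user, items in user_histories.items():
        lines.append(f"- {user} watched: {', '.join(items)}")
    lines.append("Recommend 5 movies for the next user.")
    return "\n".join(lines)


def query_llm(system_prompt: str, user_query: str) -> List[str]:
    """Hypothetical stand-in for calling an actual LLM RecSys."""
    raise NotImplementedError


def similarity_attack(recommendations: List[str],
                      candidate_history: List[str],
                      threshold: float = 0.3) -> bool:
    """Guess 'member' if the recommendations overlap heavily with the
    candidate's known interactions (Jaccard overlap as a crude score)."""
    rec, hist = set(recommendations), set(candidate_history)
    overlap = len(rec & hist) / max(len(rec | hist), 1)
    return overlap >= threshold
```

In such a sketch, an attacker who holds a candidate user's interaction list would compare it against the recommendations the deployed system returns; an unusually high overlap is taken as evidence that the candidate's history was included in the system prompt. The threshold would have to be calibrated on non-member users in practice.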