Towards Robust Evaluation of Unlearning in LLMs via Data Transformations

dc.contributor.author: Joshi, Abhinav
dc.contributor.author: Saha, Shaswati
dc.contributor.author: Shukla, Divyaksh
dc.contributor.author: Vema, Sriram
dc.contributor.author: Jhamtani, Harsh
dc.contributor.author: Gaur, Manas
dc.contributor.author: Modi, Ashutosh
dc.date.accessioned: 2024-12-11T17:02:39Z
dc.date.available: 2024-12-11T17:02:39Z
dc.date.issued: 2024-11
dc.description: Findings of the Association for Computational Linguistics: EMNLP 2024, November 12-16, 2024, Miami, Florida, USA
dc.description.abstract: Large Language Models (LLMs) have proven highly successful in a wide range of applications, from regular NLP-based use cases to AI agents. LLMs are trained on vast corpora of text from various sources; despite best efforts during the data pre-processing stage of training, they may pick up undesirable information such as personally identifiable information (PII). Consequently, research in the area of Machine Unlearning (MUL) has recently become active; the main idea is to force LLMs to forget (unlearn) certain information (e.g., PII) without suffering performance loss on regular tasks. In this work, we examine the robustness of existing MUL techniques in their ability to enable leakage-proof forgetting in LLMs. In particular, we examine the effect of data transformations on forgetting: can an unlearned LLM recall forgotten information if the format of the input changes? Our findings on the TOFU dataset highlight the necessity of using diverse data formats to quantify unlearning in LLMs more reliably.
dc.description.uri: https://aclanthology.org/2024.findings-emnlp.706
dc.format.extent: 20 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2kcfg-288b
dc.identifier.citation: Joshi, Abhinav, Shaswati Saha, Divyaksh Shukla, Sriram Vema, Harsh Jhamtani, Manas Gaur, and Ashutosh Modi. "Towards Robust Evaluation of Unlearning in LLMs via Data Transformations." In Findings of the Association for Computational Linguistics: EMNLP 2024, edited by Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, 12100–119. Miami, Florida, USA: Association for Computational Linguistics, 2024. https://aclanthology.org/2024.findings-emnlp.706.
dc.identifier.uri: https://doi.org/10.18653/v1/2024.findings-emnlp.706
dc.identifier.uri: http://hdl.handle.net/11603/37091
dc.language.iso: en_US
dc.publisher: Association for Computational Linguistics
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: UMBC Ebiquity Research Group
dc.title: Towards Robust Evaluation of Unlearning in LLMs via Data Transformations
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name:
2024.findingsemnlp.706.pdf
Size:
853.63 KB
Format:
Adobe Portable Document Format