Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation
| dc.contributor.author | Mohseni, Seyedreza | |
| dc.contributor.author | Mohammadi, Ali | |
| dc.contributor.author | Tilwani, Deepa | |
| dc.contributor.author | Saxena, Yash | |
| dc.contributor.author | Ndawula, Gerald | |
| dc.contributor.author | Vema, Sriram | |
| dc.contributor.author | Raff, Edward | |
| dc.contributor.author | Gaur, Manas | |
| dc.date.accessioned | 2025-01-31T18:24:11Z | |
| dc.date.available | 2025-01-31T18:24:11Z | |
| dc.date.issued | 2024-12-24 | |
| dc.description.abstract | Malware authors often employ code obfuscation to make their malware harder to detect. Existing tools for generating obfuscated code often require access to the original source code (e.g., C++ or Java), and adding new obfuscations is a non-trivial, labor-intensive process. In this study, we ask the following question: Can Large Language Models (LLMs) generate new obfuscated assembly code? If so, this poses a risk to anti-virus engines and potentially increases the flexibility of attackers to create new obfuscation patterns. We answer this in the affirmative by developing the MetamorphASM benchmark, comprising the MetamorphASM Dataset (MAD) along with three code obfuscation techniques: dead code insertion, register substitution, and control flow change. MetamorphASM systematically evaluates the ability of LLMs to generate and analyze obfuscated code using MAD, which contains 328,200 obfuscated assembly code samples. We release this dataset and analyze the success rate of various LLMs (e.g., GPT-3.5/4, GPT-4o-mini, Starcoder, CodeGemma, CodeLlama, CodeT5, and LLaMA 3.1) in generating obfuscated assembly code. The evaluation was performed using established information-theoretic metrics and manual human review to ensure correctness and provide the foundation for researchers to study and develop remediations to this risk. The source code can be found at the following GitHub link: https://github.com/mohammadi-ali/MetamorphASM. | |
| dc.description.sponsorship | We acknowledge the support from UMBC Cybersecurity Leadership – Exploratory Grant Program. Any opinions, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UMBC or Booz Allen Hamilton. | |
| dc.description.uri | http://arxiv.org/abs/2412.16135 | |
| dc.format.extent | 9 pages | |
| dc.genre | journal articles | |
| dc.genre | postprints | |
| dc.identifier | doi:10.13016/m2apdj-1lgs | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2412.16135 | |
| dc.identifier.uri | http://hdl.handle.net/11603/37563 | |
| dc.language.iso | en_US | |
| dc.publisher | AAAI | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
| dc.relation.ispartof | UMBC Student Collection | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | Attribution 4.0 International | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | Computer Science - Artificial Intelligence | |
| dc.subject | Computer Science - Cryptography and Security | |
| dc.subject | Computer Science - Computation and Language | |
| dc.subject | UMBC Ebiquity Research Group | |
| dc.title | Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0009-0006-6081-9896 | |
| dcterms.creator | https://orcid.org/0000-0002-5411-2230 | |
| dcterms.creator | https://orcid.org/0009-0001-2548-4705 |
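The abstract names dead code insertion as one of the three obfuscation techniques in the benchmark. The following is a minimal sketch of that general technique, not the authors' implementation: the function name, the `DEAD_OPS` pool, and the insertion probability are all hypothetical choices for illustration; the actual MetamorphASM pipeline is in the linked repository.

```python
import random

# Semantically neutral x86 instructions: each leaves all register and
# memory state unchanged, so interleaving them preserves behavior while
# altering the byte sequence an anti-virus signature would match.
DEAD_OPS = ["nop", "xchg eax, eax", "mov eax, eax", "lea ebx, [ebx+0]"]

def insert_dead_code(asm_lines, prob=0.5, seed=0):
    """Return a copy of asm_lines with dead instructions interleaved.

    prob is the chance of inserting one dead op after each original
    line; seed makes the obfuscation reproducible.
    """
    rng = random.Random(seed)
    out = []
    for line in asm_lines:
        out.append(line)
        if rng.random() < prob:
            out.append("    " + rng.choice(DEAD_OPS))
    return out

snippet = ["    mov eax, 1", "    add eax, ebx", "    ret"]
print("\n".join(insert_dead_code(snippet)))
```

Because the inserted instructions are no-ops, the obfuscated listing computes the same result as the original while presenting a different instruction stream, which is the property the benchmark asks LLMs to reproduce and to recognize.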
