Malware Classification from Memory Dumps Using Machine Learning, Transformers, and Large Language Models

dc.contributor.author: Dweib, Areej
dc.contributor.author: Tanina, Montaser
dc.contributor.author: Alawi, Shehab
dc.contributor.author: Dyab, Mohammad
dc.contributor.author: Ashqar, Huthaifa
dc.date.accessioned: 2025-10-16T15:27:12Z
dc.date.issued: 2025-03-04
dc.description: 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 5-6, 2025, Mount Pleasant, MI, USA
dc.description.abstract: This study investigates the performance of various classification models for a malware classification task using different feature sets and data configurations. Six traditional models were evaluated: Logistic Regression, K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Decision Trees, Random Forest (RF), and Extreme Gradient Boosting (XGB), alongside two deep learning models, Recurrent Neural Networks (RNN) and Transformers, and the Gemini zero-shot and few-shot learning methods. Four feature sets were tested: All Features, Literature Review Features, the Top 45 Features from RF, and Down-Sampled with Top 45 Features. XGB achieved the highest accuracy of 87.42% using the Top 45 Features, outperforming all other models. RF followed closely with 87.23% accuracy on the same feature set. In contrast, the deep learning models underperformed, with RNN achieving 66.71% accuracy and Transformers reaching 71.59%. Down-sampling reduced performance across all models, with XGB dropping to 81.31%. The Gemini zero-shot and few-shot approaches showed the lowest performance, with accuracies of 40.65% and 48.65%, respectively. The results highlight the importance of feature selection in improving model performance while reducing computational complexity. Traditional models such as XGB and RF demonstrated superior performance, while deep learning and few-shot methods struggled to match their accuracy. This study underscores the effectiveness of traditional machine learning models for structured datasets and provides a foundation for future research into hybrid approaches and larger datasets.
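The best-performing pipeline described in the abstract, ranking features with a Random Forest and training a boosted-tree classifier on the top 45, can be sketched as follows. This is an illustrative sketch only: the synthetic data stands in for the paper's memory-dump dataset, and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
# Sketch of the abstract's pipeline: rank features with a Random Forest,
# keep the top 45, then train a boosted-tree classifier on that subset.
# All data and model settings here are illustrative, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic multi-class dataset standing in for the memory-dump features.
X, y = make_classification(n_samples=1000, n_features=60, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: rank features by Random Forest importance and keep the top 45.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top45 = np.argsort(rf.feature_importances_)[::-1][:45]

# Step 2: train the boosted-tree classifier on the reduced feature set.
gb = GradientBoostingClassifier(n_estimators=50, random_state=0)
gb.fit(X_tr[:, top45], y_tr)
acc = gb.score(X_te[:, top45], y_te)
print(f"accuracy on top-45 features: {acc:.3f}")
```

The two-step structure mirrors the abstract's finding that pruning to the RF-ranked top 45 features preserved accuracy while cutting input dimensionality.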
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/11141051
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2b1oy-mujs
dc.identifier.citation: Dweib, Areej, Montaser Tanina, Shehab Alawi, Mohammad Dyab, and Huthaifa I. Ashqar. “Malware Classification from Memory Dumps Using Machine Learning, Transformers, and Large Language Models.” 2025 IEEE 4th International Conference on Computing and Machine Intelligence (ICMI), April 2025, 1–5. https://doi.org/10.1109/ICMI65310.2025.11141051.
dc.identifier.uri: https://doi.org/10.1109/ICMI65310.2025.11141051
dc.identifier.uri: http://hdl.handle.net/11603/40451
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Data Science
dc.rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Computer Science - Cryptography and Security
dc.subject: Computer Science - Machine Learning
dc.subject: Computer Science - Computation and Language
dc.title: Malware Classification from Memory Dumps Using Machine Learning, Transformers, and Large Language Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6835-8338

Files

Original bundle

Name: 2503.02144v1.pdf
Size: 315.55 KB
Format: Adobe Portable Document Format