Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits!
dc.contributor.author | Patel, Tirth | |
dc.contributor.author | Lu, Fred | |
dc.contributor.author | Raff, Edward | |
dc.contributor.author | Nicholas, Charles | |
dc.contributor.author | Matuszek, Cynthia | |
dc.contributor.author | Holt, James | |
dc.date.accessioned | 2024-01-12T13:11:46Z | |
dc.date.available | 2024-01-12T13:11:46Z | |
dc.date.issued | 2023-12-25 | |
dc.description | CAMLIS’23: Conference on Applied Machine Learning for Information Security, October 19–20, 2023, Arlington, VA | |
dc.description.abstract | Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines, meaning a 0.1% change can cause an overwhelming number of false positives. However, academic research is often constrained to public datasets on the order of ten thousand samples, which are too small to detect improvements that may be relevant to industry. Working within these constraints, we devise an approach to generate a benchmark of configurable difficulty from a pool of available samples. This is done by leveraging malware family information from tools like AVClass to construct training/test splits that have different generalization rates, as measured by a secondary model. Our experiments demonstrate that using a less accurate secondary model with disparate features is effective at producing benchmarks for a more sophisticated target model under evaluation. We also ablate against alternative designs to show the need for our approach. | |
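To make the abstract's idea concrete, below is a minimal sketch of a family-aware split whose difficulty is scored by a cheap secondary model. It assumes synthetic data, scikit-learn, and AVClass-style family tags; the function names (make_family_split, score_split), the random-search loop, and the toy labels are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): build train/test splits that keep
# whole malware families on one side, then score each candidate split with a
# simple "secondary" model and keep the hardest one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a labelled corpus: feature vectors, benign/malware
# labels, and an AVClass-style family tag per sample (all invented here).
n, d, n_families = 5000, 64, 50
X = rng.normal(size=(n, d))
families = rng.integers(0, n_families, size=n)
y = (families % 2).astype(int)  # toy rule: half the families are "malware"
X += y[:, None] * 0.5           # give the labels some learnable signal

def make_family_split(families, test_frac=0.3, seed=0):
    """Assign whole families to train or test so no family straddles the split."""
    r = np.random.default_rng(seed)
    fams = np.unique(families)
    test_fams = r.choice(fams, size=int(len(fams) * test_frac), replace=False)
    test_mask = np.isin(families, test_fams)
    return ~test_mask, test_mask

def score_split(X, y, train_mask, test_mask):
    """Generalization rate of a simple secondary model on the candidate split."""
    clf = LogisticRegression(max_iter=1000).fit(X[train_mask], y[train_mask])
    return roc_auc_score(y[test_mask], clf.predict_proba(X[test_mask])[:, 1])

# Search over random family assignments and keep the hardest split found,
# i.e. the one where the secondary model generalizes worst.
candidates = [make_family_split(families, seed=s) for s in range(20)]
scores = [score_split(X, y, tr, te) for tr, te in candidates]
hardest = int(np.argmin(scores))
print(f"secondary-model AUC across splits: min={min(scores):.3f}, max={max(scores):.3f}")
print(f"selected hardest split index: {hardest}")
```

In this sketch the secondary model is deliberately simple and uses its own feature view; the split it finds hardest is then handed to the more sophisticated target model under evaluation, which mirrors the configurable-difficulty benchmark idea described in the abstract.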
dc.description.uri | https://arxiv.org/abs/2312.15813 | |
dc.format.extent | 12 pages | |
dc.genre | conference papers and proceedings | |
dc.genre | preprints | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2312.15813 | |
dc.identifier.uri | http://hdl.handle.net/11603/31274 | |
dc.language.iso | en_US | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.rights | CC BY 4.0 DEED Attribution 4.0 International | en |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.title | Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0009-0003-3212-8156 | |
dcterms.creator | https://orcid.org/0000-0001-9494-7139 | |
dcterms.creator | https://orcid.org/0000-0003-1383-8120 |