Static Malware Detection & Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus
Links to Files: http://ceur-ws.org/Vol-2269/FSS-18_paper_11.pdf
Type of Work: conference papers and proceedings, 8 pages
Citation of Original Publication: William Fleshman, Edward Raff, Richard Zak, Mark McLean, Charles Nicholas, "Static Malware Detection & Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus," Proceedings of the AAAI Fall 2018 Symposium on Adversary-Aware Learning Techniques and Trends in Cybersecurity, 2018, http://ceur-ws.org/Vol-2269/FSS-18_paper_11.pdf
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Copyright © by the paper’s authors.
Abstract: As machine-learning (ML) based systems for malware detection become more prevalent, it becomes necessary to quantify their benefits compared to the more traditional anti-virus (AV) systems widely used today. It is not practical to build an agreed-upon test set for benchmarking malware detection systems on pure classification performance. Instead, we tackle the problem by creating a new testing methodology in which we evaluate the change in performance on a set of known benign and malicious files as adversarial modifications are performed. The change in performance, combined with the evasion technique used, then quantifies a system's robustness against that approach. Through these experiments we are able to show, in a quantifiable way, how purely ML-based systems can be more robust than AV products at detecting malware that attempts evasion through modification, but may be slower to adapt in the face of significantly novel attacks.
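The core of the methodology can be sketched as follows. This is a hypothetical illustration, not the paper's actual code: the toy signature detector, the padding and byte-flipping "evasions," and all function names are stand-ins chosen to show how a drop in detection rate after modification quantifies (a lack of) robustness.

```python
def detection_rate(detector, samples):
    """Fraction of samples the detector flags as malicious."""
    return sum(detector(s) for s in samples) / len(samples)

def robustness(detector, samples, modify):
    """Change in detection rate after applying an evasion technique.

    0.0 means the evasion had no effect (fully robust); values near
    1.0 mean the modification evaded detection on most samples.
    """
    before = detection_rate(detector, samples)
    after = detection_rate(detector, [modify(s) for s in samples])
    return before - after

# Toy detector: flags files whose first byte matches a signature.
signature_detector = lambda b: b[:1] == b"\x4d"

# Toy "known malicious" corpus (both start with the signature byte).
malware = [b"\x4dZ...payload-a", b"\x4dZ...payload-b"]

# Two adversarial modifications: benign padding (which this detector
# ignores) versus overwriting the signature byte (which defeats it).
pad = lambda b: b + b"\x00" * 16
flip = lambda b: b"\x00" + b[1:]

print(robustness(signature_detector, malware, pad))   # 0.0 -> robust
print(robustness(signature_detector, malware, flip))  # 1.0 -> fully evaded
```

In the paper's setting the detector is a real ML model or AV product and the modifications are realistic evasion techniques, but the robustness score is computed in the same before/after fashion.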