Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition

dc.contributor.author: Richards, Luke E.
dc.contributor.author: Raff, Edward
dc.contributor.author: Matuszek, Cynthia
dc.date.accessioned: 2023-03-22T23:06:54Z
dc.date.available: 2023-03-22T23:06:54Z
dc.date.issued: 2023-02-17
dc.description.abstract: Over the past decade, the machine learning security community has developed a myriad of defenses against evasion attacks. An understudied question in that community is: whom do these defenses defend? This work considers common approaches to defending learned systems and how security defenses produce performance inequities across different sub-populations. We outline appropriate parity metrics for analysis and begin to answer this question through empirical results on the fairness implications of machine learning security methods. We find that many proposed methods can cause direct harm, such as false rejection and unequal benefits from robustness training. The framework we propose for measuring defense equality can be applied to robustly trained models, preprocessing-based defenses, and rejection methods. We identify a set of datasets with a user-centered application and a reasonable computational cost that are suitable for case studies in measuring the equality of defenses. In our case study of speech command recognition, we show that adversarial training and augmentation provide unequal but complex protections for social subgroups across gender, accent, and age in relation to user coverage. We compare the equality of two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing more equitable due to its sampling mechanism for minority groups. This is the first work to examine disparities in adversarial robustness in the speech domain and to evaluate the fairness of rejection-based defenses.
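As a concrete illustration of the parity analysis the abstract describes, below is a minimal sketch, not code from the paper, of one way to compare false rejection rates across social subgroups for a rejection-based defense such as randomized smoothing or neural rejection. The record format and the function names (rejection_rate_by_group, parity_gap) are illustrative assumptions rather than artifacts of the authors' framework.

from collections import defaultdict

def rejection_rate_by_group(records):
    """records: iterable of (group_label, was_rejected) pairs, one per
    benign input run through the defense."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rejections, total]
    for group, was_rejected in records:
        counts[group][0] += int(was_rejected)
        counts[group][1] += 1
    return {g: rej / total for g, (rej, total) in counts.items()}

def parity_gap(rates):
    """Difference between the highest and lowest per-group rejection
    rates; one simple parity metric for a rejection-based defense."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical example: benign speech commands from three accent groups
# run through a rejection-based defense.
records = [
    ("accent_a", False), ("accent_a", False), ("accent_a", True),
    ("accent_b", False), ("accent_b", True), ("accent_b", True),
    ("accent_c", False), ("accent_c", False), ("accent_c", False),
]
rates = rejection_rate_by_group(records)
print(rates)              # per-group false rejection rates
print(parity_gap(rates))  # equality-of-defense gap across groups

A gap near zero means the defense falsely rejects benign inputs at similar rates across groups; a large gap is the kind of direct harm, unequal false rejection, that the abstract reports.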
dc.description.sponsorship: We acknowledge that this work relies on the availability of subgroup labels, which can be difficult to obtain in all domains. We also recognize that a case study in speech command recognition may not apply directly to other domains; future work must identify benchmarks that address this problem. We hope that by introducing these metrics in this domain and conducting case studies, we can raise awareness of the problem and spur work on the development and research of equal defenses in machine learning security.
dc.description.uri: https://arxiv.org/abs/2302.08973
dc.format.extent: 21 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2u5lz-ufzv
dc.identifier.uri: https://doi.org/10.48550/arXiv.2302.08973
dc.identifier.uri: http://hdl.handle.net/11603/27040
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-9900-1972
dcterms.creator: https://orcid.org/0000-0003-1383-8120

Files

Original bundle

Name: 2302.08973.pdf
Size: 2.25 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon at submission