Machine Learning Security as a Source of Unfairness in Human-Robot Interaction
PDF: https://iral.cs.umbc.edu/Pubs/Richards2023DEIHRI.pdf
Permanent link: http://hdl.handle.net/11603/28085
Date: 2023
Type of work: Text, 3 pages; journal article, preprint
Abstract
Machine learning models that sense human speech, body placement, and other key features are commonplace in human-robot interaction. However, deploying such models is not itself without risk. Research in machine learning security examines how such models can be exploited and the risks associated with these exploits. Unfortunately, the threat models produced by machine learning security research do not incorporate the rich sociotechnical underpinnings of the defenses they propose; as a result, efforts to improve the security of machine learning models may actually widen the performance gap across demographic groups, yielding risk mitigations that work better for one group than another. In this work, we outline why current approaches to machine learning security present diversity, equity, and inclusion (DEI) concerns for the human-robot interaction community, and where there are open areas for collaboration.
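To make the disparity concern concrete, below is a minimal sketch (not from the paper) of how one might audit whether a security-motivated intervention widens a per-group accuracy gap. Everything here is an assumption for illustration: the data are synthetic, the per-group error rates are invented, and the "hardened" model is simulated rather than produced by any real defense.

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Prediction accuracy computed separately within each demographic group."""
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

rng = np.random.default_rng(0)

# Synthetic labels and a majority/minority group split (hypothetical).
y_true = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])

def simulate_preds(y, groups, err_a, err_b):
    """Flip each label with a group-specific error rate to mimic a model's mistakes."""
    flip = np.where(groups == "A",
                    rng.random(y.size) < err_a,
                    rng.random(y.size) < err_b)
    return np.where(flip, 1 - y, y)

# Assumed scenario: the hardened model trades clean accuracy for robustness,
# but loses more accuracy on minority group B than on majority group A.
baseline = simulate_preds(y_true, groups, err_a=0.10, err_b=0.12)
hardened = simulate_preds(y_true, groups, err_a=0.12, err_b=0.25)

for name, preds in [("baseline", baseline), ("hardened", hardened)]:
    acc = per_group_accuracy(y_true, preds, groups)
    gap = max(acc.values()) - min(acc.values())
    print(f"{name}: per-group accuracy {acc}, gap {gap:.3f}")
```

Run on the synthetic scenario above, the hardened model's per-group accuracy gap is several times the baseline's, which is the kind of disparity the abstract warns a purely aggregate evaluation of a defense would miss.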