Preventing Discriminatory Decision-making in Evolving Data Streams
Date
2023-06-12
Citation of Original Publication
Zichong Wang, Nripsuta Saxena, Tongjia Yu, Sneha Karki, Tyler Zetty, Israat Haque, Shan Zhou, Dukka Kc, Ian Stockwell, Xuyu Wang, Albert Bifet, and Wenbin Zhang. 2023. Preventing Discriminatory Decision-making in Evolving Data Streams. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). Association for Computing Machinery, New York, NY, USA, 149–159. https://doi.org/10.1145/3593013.3593984
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Bias in machine learning has rightly received significant attention over the last decade. However, most fair machine learning (fair-ML)
work to address bias in decision-making systems has focused solely on the offline setting. Despite the wide prevalence of online
systems in the real world, work on identifying and correcting bias in the online setting is severely lacking. The unique challenges
of the online environment make addressing bias more difficult than in the offline setting. First, Streaming Machine Learning (SML)
algorithms must deal with the constantly evolving real-time data stream. Second, they need to adapt to changing data distributions
(concept drift) to make accurate predictions on new incoming data. Adding fairness constraints to this already complicated task is not
straightforward. In this work, we focus on the challenges of achieving fairness in biased data streams while accounting for the presence
of concept drift, accessing one sample at a time. We present Fair Sampling over Stream (FS²), a novel fair rebalancing approach capable
of being integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness
Bonded Utility (FBU), to evaluate and compare the trade-off between performance and fairness of different bias mitigation methods
efficiently. FBU simplifies the comparison of fairness-performance trade-offs of multiple techniques through one unified and intuitive
evaluation, allowing model designers to easily choose a technique. Overall, extensive evaluations show our measures surpass those of
other fair online techniques previously reported in the literature.
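The definitions of FS² and FBU are given in the full paper; as a loose illustration of the kind of performance-fairness trade-off such a unified metric evaluates, the sketch below tracks streaming accuracy and a statistical-parity gap over a toy stream and combines them into one score. The function names, weighting scheme, and example stream are assumptions for illustration only and do not reproduce the paper's FS² sampling or FBU metric.

# Illustrative sketch only: NOT the paper's FS^2 or FBU method.
# Combines accuracy on a stream with a group-fairness gap
# (statistical parity difference) into a single toy trade-off score.

def statistical_parity_difference(decisions, groups):
    """P(positive decision | group 0) - P(positive decision | group 1)."""
    pos = {0: 0, 1: 0}
    cnt = {0: 0, 1: 0}
    for d, g in zip(decisions, groups):
        cnt[g] += 1
        pos[g] += d
    rate = {g: (pos[g] / cnt[g]) if cnt[g] else 0.0 for g in (0, 1)}
    return rate[0] - rate[1]  # group 1 treated as the protected group (assumption)

def tradeoff_score(accuracy, fairness_gap, weight=0.5):
    """Toy unified score: higher is better; 'weight' balances the two terms."""
    return weight * accuracy + (1 - weight) * (1 - abs(fairness_gap))

# Example: a stream processed one sample at a time,
# recorded as (prediction, true_label, protected_group).
stream = [
    (1, 1, 0), (0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 0),
]
preds  = [p for p, _, _ in stream]
labels = [y for _, y, _ in stream]
groups = [g for _, _, g in stream]

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(stream)
gap = statistical_parity_difference(preds, groups)
print(f"accuracy={accuracy:.2f}  parity_gap={gap:+.2f}  "
      f"score={tradeoff_score(accuracy, gap):.2f}")

In this hypothetical setup, a mitigation method that raises accuracy but widens the parity gap (or vice versa) can be compared against alternatives on a single axis, which is the kind of comparison FBU is designed to make intuitive.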