Preventing Discriminatory Decision-making in Evolving Data Streams

dc.contributor.author: Wang, Zichong
dc.contributor.author: Saxena, Nripsuta
dc.contributor.author: Yu, Tongjia
dc.contributor.author: Karki, Sneha
dc.contributor.author: Zetty, Tyler
dc.contributor.author: Haque, Israat
dc.contributor.author: Zhou, Shan
dc.contributor.author: Kc, Dukka
dc.contributor.author: Stockwell, Ian
dc.contributor.author: Wang, Xuyu
dc.contributor.author: Bifet, Albert
dc.contributor.author: Zhang, Wenbin
dc.date.accessioned: 2023-03-22T21:57:26Z
dc.date.available: 2023-03-22T21:57:26Z
dc.date.issued: 2023-06-12
dc.description: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, June 2023
dc.description.abstract: Bias in machine learning has rightly received significant attention over the last decade. However, most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting. Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking. The unique challenges of the online environment make addressing bias more difficult than in the offline setting. First, Streaming Machine Learning (SML) algorithms must deal with the constantly evolving real-time data stream. Second, they need to adapt to changing data distributions (concept drift) to make accurate predictions on new incoming data. Adding fairness constraints to this already complicated task is not straightforward. In this work, we focus on the challenges of achieving fairness in biased data streams while accounting for the presence of concept drift, accessing one sample at a time. We present Fair Sampling over Stream (FS²), a novel fair rebalancing approach capable of being integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness Bonded Utility (FBU), to evaluate and compare the trade-off between performance and fairness of different bias mitigation methods efficiently. FBU simplifies the comparison of fairness-performance trade-offs of multiple techniques through one unified and intuitive evaluation, allowing model designers to easily choose a technique. Overall, extensive evaluations show our measures surpass those of other fair online techniques previously reported in the literature.
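The abstract describes monitoring and correcting bias in a data stream one sample at a time under concept drift. As a toy illustration of that streaming-fairness setting only (not the paper's FS² rebalancing or FBU metric), the sketch below tracks the demographic-parity gap over a bounded sliding window; the function name, window size, and binary group/label encoding are all assumptions for the example.

```python
from collections import deque

def stream_parity_gap(stream, window=100):
    """Demographic-parity gap over a sliding window of (group, label) pairs.

    Illustrative only: a toy fairness monitor for streaming data, not the
    FS^2 algorithm from the paper. `group` is 0/1 (protected attribute),
    `label` is 0/1 (positive outcome).
    """
    recent = deque(maxlen=window)  # bounded memory: one sample at a time
    gaps = []
    for group, label in stream:
        recent.append((group, label))
        pos = [l for g, l in recent if g == 1]  # protected-group outcomes
        neg = [l for g, l in recent if g == 0]  # unprotected-group outcomes
        if pos and neg:
            # |P(y=1 | g=1) - P(y=1 | g=0)| on the current window
            gaps.append(abs(sum(pos) / len(pos) - sum(neg) / len(neg)))
    return gaps
```

A rising gap over successive windows would signal that drift is amplifying disparity, which is the kind of condition a fair rebalancing method in the paper's setting would need to react to.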
dc.description.uri: https://dl.acm.org/doi/abs/10.1145/3593013.3593984
dc.format.extent: 17 pages
dc.genre: journal articles
dc.genre: preprints
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2hbg9-to5u
dc.identifier.citation: Zichong Wang, Nripsuta Saxena, Tongjia Yu, Sneha Karki, Tyler Zetty, Israat Haque, Shan Zhou, Dukka Kc, Ian Stockwell, Xuyu Wang, Albert Bifet, and Wenbin Zhang. 2023. Preventing Discriminatory Decision-making in Evolving Data Streams. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). Association for Computing Machinery, New York, NY, USA, 149–159. https://doi.org/10.1145/3593013.3593984
dc.identifier.uri: https://doi.org/10.1145/3593013.3593984
dc.identifier.uri: http://hdl.handle.net/11603/27035
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Staff Collection
dc.relation.ispartof: A. All Hilltop Institute (UMBC) Works
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Preventing Discriminatory Decision-making in Evolving Data Streams
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-3995-339X

Files

Original bundle

Name: 2302.08017.pdf
Size: 1.45 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission