Can We Obtain Fairness For Free?
dc.contributor.author | Islam, Rashidul | |
dc.contributor.author | Pan, Shimei | |
dc.contributor.author | Foulds, James | |
dc.date.accessioned | 2025-01-08T15:08:51Z | |
dc.date.available | 2025-01-08T15:08:51Z | |
dc.date.issued | 2021-07-30 | |
dc.description | AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event USA, May 19 - 21, 2021 | |
dc.description.abstract | There is growing awareness that AI and machine learning systems can in some cases learn to behave in unfair and discriminatory ways with harmful consequences. However, despite an enormous amount of research, techniques for ensuring AI fairness have yet to see widespread deployment in real systems. One of the main barriers is the conventional wisdom that fairness brings a cost in predictive performance metrics such as accuracy, which could affect an organization's bottom line. In this paper, we take a closer look at this concern. Clearly, fairness/performance trade-offs exist, but are they inevitable? In contrast to the conventional wisdom, we find that it is frequently possible, indeed straightforward, to improve on a trained model's fairness without sacrificing predictive performance. We systematically study the behavior of fair learning algorithms on a range of benchmark datasets, showing that it is possible to improve fairness to some degree with no loss (or even an improvement) in predictive performance via a sensible hyper-parameter selection strategy. Our results reveal a pathway toward increasing the deployment of fair AI methods, with potentially substantial positive real-world impacts. | |
dc.description.sponsorship | This work was performed under the following financial assistance award: 60NANB18D227 from the U.S. Department of Commerce, National Institute of Standards and Technology. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS2046381, IIS1850023, and IIS1927486. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | |
dc.description.uri | https://dl.acm.org/doi/10.1145/3461702.3462614 | |
dc.format.extent | 11 pages | |
dc.genre | conference papers and proceedings | |
dc.identifier | doi:10.13016/m2hwu8-zwix | |
dc.identifier.citation | Islam, Rashidul, Shimei Pan, and James R. Foulds. “Can We Obtain Fairness For Free?” In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 586–96. AIES ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3461702.3462614. | |
dc.identifier.uri | https://doi.org/10.1145/3461702.3462614 | |
dc.identifier.uri | http://hdl.handle.net/11603/37194 | |
dc.language.iso | en_US | |
dc.publisher | ACM | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Department | |
dc.relation.ispartof | UMBC College of Engineering and Information Technology Dean's Office | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.subject | AI Fairness | |
dc.subject | Machine Learning | |
dc.subject | Hyperparameter Tuning | |
dc.subject | Predictive Performance | |
dc.subject | Gerrymandering Errors | |
dc.subject | Model Deployment | |
dc.subject | Bias and Discrimination | |
dc.title | Can We Obtain Fairness For Free? | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0001-5276-5708 | |
dcterms.creator | https://orcid.org/0000-0002-5989-8543 | |
dcterms.creator | https://orcid.org/0000-0003-0935-4182 |