Towards A Unifying Human-Centered AI Fairness Framework

dc.contributor.authorRahman, Munshi Mahbubur
dc.contributor.authorPan, Shimei
dc.contributor.authorFoulds, James
dc.date.accessioned2026-02-03T18:14:42Z
dc.date.issued2025-09-04
dc.descriptionGoodIT '24: 2024 International Conference on Information Technology for Social Good, Bremen, Germany, September 4-6, 2024.
dc.description.abstractAchieving fairness in AI systems is a critical yet challenging task due to conflicting fairness metrics and their underlying societal assumptions, e.g., the extent to which racist and sexist societal processes are presumed to cause harm and the extent to which affirmative corrections should be applied. Moreover, these measures often contradict each other and may make the AI system less accurate. This work takes a step towards a unifying human-centered fairness framework that guides stakeholders in navigating these complexities, including the metrics' potential incompatibility and the corresponding trade-offs. Our framework acknowledges the spectrum of fairness definitions (individual vs. group fairness; infra-marginal, politically conservative, vs. intersectional, politically progressive, treatment of disparities), allowing stakeholders to prioritize desired outcomes by assigning weights to the various fairness considerations and trading them off against each other, as well as against predictive performance. The framework supports stakeholders in exploring the impacts of their fairness choices in order to reach a consensus solution, and our learning algorithms then ensure that the resulting AI system reflects the stakeholder-chosen priorities. By enabling multi-stakeholder compromises, our framework can potentially mitigate individual analysts' subjectivity. We performed experiments to validate our methods on the UCI Adult census dataset and the COMPAS criminal recidivism dataset.
dc.description.sponsorshipThis material is based upon work supported by the National Science Foundation under Grant No.’s IIS1927486; IIS2046381. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
dc.description.urihttps://dl.acm.org/doi/10.1145/3677525.3678645
dc.format.extent6 pages
dc.genreconference papers and proceedings
dc.identifierdoi:10.13016/m2fwct-plsu
dc.identifier.citationRahman, Munshi Mahbubur, Shimei Pan, and James R. Foulds. “Towards A Unifying Human-Centered AI Fairness Framework.” Proceedings of the 2024 International Conference on Information Technology for Social Good, GoodIT ’24, September 4, 2024, 88–92. https://doi.org/10.1145/3677525.3678645.
dc.identifier.urihttps://doi.org/10.1145/3677525.3678645
dc.identifier.urihttp://hdl.handle.net/11603/41653
dc.language.isoen
dc.publisherACM
dc.relation.isAvailableAtThe University of Maryland, Baltimore County (UMBC)
dc.relation.ispartofUMBC Computer Science and Electrical Engineering Department
dc.relation.ispartofUMBC Faculty Collection
dc.relation.ispartofUMBC College of Engineering and Information Technology Dean's Office
dc.relation.ispartofUMBC Student Collection
dc.relation.ispartofUMBC Information Systems Department
dc.rightsThis item is likely protected under Title 17 of the U.S. Copyright Law. Unless the item is under a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
dc.subjectUMBC Accelerated Cognitive Cybersecurity Laboratory
dc.subjectUMBC Ebiquity Research Group
dc.titleTowards A Unifying Human-Centered AI Fairness Framework
dc.typeText
dcterms.creatorhttps://orcid.org/0009-0005-9032-3229
dcterms.creatorhttps://orcid.org/0000-0002-5989-8543
dcterms.creatorhttps://orcid.org/0000-0003-0935-4182
