An Intersectional Definition of Fairness
dc.contributor.author | Foulds, James | |
dc.contributor.author | Islam, Rashidul | |
dc.contributor.author | Keya, Kamrun Naher | |
dc.contributor.author | Pan, Shimei | |
dc.date.accessioned | 2018-09-05T17:10:14Z | |
dc.date.available | 2018-09-05T17:10:14Z | |
dc.date.issued | 2020-05-27 | |
dc.description | 2020 IEEE 36th International Conference on Data Engineering (ICDE), 20-24 April 2020, Dallas, TX, USA | |
dc.description.abstract | We propose differential fairness, a multi-attribute definition of fairness in machine learning informed by intersectionality, a critical lens arising from the humanities literature. Our definition leverages connections between differential privacy and legal notions of fairness. We show that our criterion behaves sensibly for any subset of the set of protected attributes, and we prove economic, privacy, and generalization guarantees. We provide a learning algorithm which respects our differential fairness criterion. Experiments on the COMPAS criminal recidivism dataset and census data demonstrate the utility of our methods. | en_US |
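The abstract's differential fairness criterion, by analogy with differential privacy, bounds how much outcome probabilities may differ across every pair of protected-attribute combinations. A minimal sketch of the idea follows; the function name `differential_fairness_epsilon` and the plain per-group positive-rate input are illustrative assumptions, not the paper's exact estimator (which uses smoothed probability estimates).

```python
import math

def differential_fairness_epsilon(group_rates):
    """Smallest epsilon such that, for all pairs of protected groups
    (s_i, s_j) and both outcomes y in {0, 1}:
        exp(-epsilon) <= P(y | s_i) / P(y | s_j) <= exp(epsilon).
    `group_rates` maps each protected-attribute combination to an
    estimated P(y = 1 | s); rates must lie strictly in (0, 1).
    NOTE: an illustrative sketch, not the paper's smoothed estimator."""
    eps = 0.0
    rates = list(group_rates.values())
    for p in rates:
        for q in rates:
            # Bound the log-ratio for the positive outcome and its complement.
            eps = max(eps,
                      abs(math.log(p) - math.log(q)),
                      abs(math.log(1 - p) - math.log(1 - q)))
    return eps
```

With equal rates across groups the criterion is satisfied at epsilon = 0; larger disparities in either outcome's probability drive epsilon up, so a small epsilon certifies fairness simultaneously over all pairs of intersectional groups.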
dc.description.uri | https://ieeexplore.ieee.org/abstract/document/9101635 | en_US |
dc.format.extent | 16 pages | en_US |
dc.genre | conference papers and proceedings preprints | en_US |
dc.identifier | doi:10.13016/M2PV6BB0C | |
dc.identifier.citation | Foulds, James R., Rashidul Islam, Kamrun Naher Keya, and Shimei Pan. “An Intersectional Definition of Fairness.” In 2020 IEEE 36th International Conference on Data Engineering (ICDE), 1918–21, 2020. https://doi.org/10.1109/ICDE48307.2020.00203. | |
dc.identifier.uri | https://doi.org/10.1109/ICDE48307.2020.00203 | |
dc.identifier.uri | http://hdl.handle.net/11603/11227 | |
dc.language.iso | en_US | en_US |
dc.publisher | IEEE | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
dc.subject | Computer Science - Machine Learning | en_US |
dc.subject | Computer Science - Computers and Society | en_US |
dc.subject | Statistics - Machine Learning | en_US |
dc.subject | measure of fairness | |
dc.subject | algorithms and data | |
dc.subject | protected attributes | |
dc.subject | systems of power | |
dc.subject | systems of oppression | |
dc.subject | race | |
dc.subject | gender | |
dc.subject | sexual orientation | |
dc.subject | class | |
dc.subject | disability | |
dc.title | An Intersectional Definition of Fairness | en_US |
dc.type | Text | en_US |