Intersectional AI: A Study of How Information Science Students Think about Ethics and Their Impact

Date

2021-01-19

Citation of Original Publication

McDonald, Nora; Pan, Shimei. Intersectional AI: A Study of How Information Science Students Think about Ethics and Their Impact. ACM SIGCHI Conference, 2020. https://www.youtube.com/watch?v=pZLxtnsrJQo

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

Recent literature has demonstrated the limited and, in some instances, waning role of ethics training in computing classes in the US. The capacity for AI to be inequitable or harmful is well documented, yet the issue continues to lack apparent urgency or effective mitigation. The question we raise in this paper is how to prepare future generations to recognize and grapple with the range of ethical issues plaguing AI technologies, particularly when those technologies are combined with surveillance technologies in ways that have grave implications for social participation and restriction: from risk assessment and bail assignment in criminal justice to the distribution of public benefits and access to housing and other critical resources that enable security and success within society. The US is a mecca of information and computer science learning for Asian students, whose experiences as minorities render them familiar with, and vulnerable to, the societal bias that feeds artificial intelligence (AI) bias. Our goal was to better understand how students who are being educated to design AI systems think about these issues and, in particular, their sensitivity to intersectional considerations that heighten risk for vulnerable groups. In this paper we report on findings from qualitative interviews with 20 graduate students, 11 from an AI class and 9 from a Data Mining class. We find that students are not predisposed to think deeply about the implications of AI design for the privacy and well-being of others unless explicitly encouraged to do so. When they do, their thinking is filtered through the lens of personal identity and experience, but their reflections tend to center on bias, an intrinsic feature of design, rather than on fairness, an outcome that requires them to imagine the consequences of AI. While they are, in fact, equipped to think about fairness when prompted by discussion and by design exercises that explicitly invite consideration of intersectionality and structural inequalities, many need help to do this empathy "work." Notably, the students who more frequently reflect on intersectional problems related to bias and fairness are also more likely to consider the connection between model attributes and bias and its interaction with context. Our findings suggest that experience with identity-based vulnerability promotes more analytically complex thinking about AI, lending further support to the argument that identity-related ethics should be integrated into computer science and data science curricula rather than positioned as a stand-alone course.