Browsing by Author "McDonald, Nora"
Now showing 1 - 8 of 8
Item Building for ‘We’: Safety Settings for Couples with Memory Concerns (Association for Computing Machinery, 2021-05) McDonald, Nora; Mentis, Helena M.
Designing technologies that support the mutual cybersecurity and autonomy of older adults facing cognitive challenges requires close collaboration of partners. As part of research to design a Safety Setting application for older adults with memory loss or mild cognitive impairment (MCI), we use a scenario-based participatory design. Our study builds on previous findings that couples’ approach to memory loss was characterized by a desire for flexibility and choice, and an embrace of role uncertainty. We find that couples don't want a system that fundamentally alters their relationship; they look to maximize self-surveillance competence and minimize loss of autonomy for their partners. All desire Safety Settings that maintain their mutual safety rather than designating one partner as the target of oversight. Couples are open to more rigorous surveillance if they have control over what types of activities trigger various levels of oversight.

Item “Citizens Too”: Safety Setting Collaboration Among Older Adults with Memory Concerns (ACM SIGCHI, 2022-04-03) McDonald, Nora; Mentis, Helena
Designing technologies that support the cybersecurity of older adults with memory concerns involves wrestling with an uncomfortable paradox between surveillance and independence, as well as the close collaboration of couples. This research captures the interactions between older adult couples where one or both have memory concerns—a primary feature of cognitive decline—as they make decisions on how to safeguard their online activities using a Safety Setting probe we designed, over the course of several informal interviews and a diary study.
Throughout, couples demonstrated a collaborative mentality to which we apply a frame of citizenship in open-source collaboration, specifically (a) histories of participation, (b) lower barriers to participation, and (c) maintaining ongoing contribution. In this metaphor of collaborative enterprise, one partner may be the service provider and the other the participant, but at varying moments they may switch roles while still maintaining a collaborative focus on preserving shared assets and freedom on the internet. We conclude with a discussion of what this service provider-contributor mentality means for empowerment through citizenship, and implications for vulnerable populations’ cybersecurity.

Item 'Don't Fall for This': Communications about Cybersafety from the AARP (ACM, 2023-10-04) McDonald, Nora; Mentis, Helena
Older adults face unique risks in trying to secure their online activities. They are not only the frequent targets of scams and fraud; they are also the targets of a barrage of cybersafety communiqués whose impact is unclear. AARP, the United States advocacy group focusing on issues facing adults over the age of 50, is among those educators whose strategies remain underexplored, yet its reach makes it imperative that we understand what it is saying, to whom, and to what effect. Drawing on an analysis of AARP publications about cybersafety and privacy, we sought to better understand its discourse on the topic. We report findings that AARP's language may have the effect of portraying bad actors ("fraudsters") as individuals rather than enterprises, which, at the target end, personalizes interactions and places too much onus on individual users to assess and deflect threats. AARP's positioning of, and guidance about, threats may sometimes prompt a thought process that puts users at the center of the narrative and may encourage engagement.
Instructing older Americans, or anyone, in the forensics of cyber-sleuthing is enormously difficult. We conclude with a discussion of a different approach to cybersafety, one that involves educating older adults about the rudiments of surveillance capitalism.

Item Elicitation and Empathy with AI-enhanced Adaptive Assistive Technologies (AATs): Towards Sustainable Inclusive Design Method Education (Aalborg University, 2023-10-10) McDonald, Nora; Massey, Aaron; Hamidi, Foad
Efforts to include people with disabilities in design education are difficult to scale, and the dynamics of participation need to be carefully planned to avoid placing unnecessary burdens on users. However, given the scale of emerging AI-enhanced technologies and their potential for creating new vulnerabilities for marginalized populations, new methods are needed for generating empathy and self-reflection in technology design students, the future creators of such technologies. We report on a study in which Information Systems graduate students used a participatory elicitation toolkit to reflect on two cases of end-user privacy perspectives towards AI-enhanced tools in the age of surveillance capitalism: their own when using tools to support learning, and those of older adults using AI-enhanced adaptive assistive technologies (AATs) that help with pointing and typing difficulties. By drawing on the experiences of students with intersectional identities, our exploratory study aimed to incorporate intersectional thinking into privacy elicitation and to further understand its role in enabling sustainable, inclusive design practice and education. While aware of the risks to their own privacy and of the role of identity and power in shaping experiences of bias, students who used the toolkit were more sanguine about the risks faced by AAT users, assuming that more data equates to better technology.
Our tool proved valuable for eliciting reflection but not empathy.

Item Intersectional AI: A Study of How Information Science Students Think about Ethics and Their Impact (Association for Computing Machinery, 2021-01-19) McDonald, Nora; Pan, Shimei
Recent literature has demonstrated the limited and, in some instances, waning role of ethical training in computing classes in the US. The capacity for AI to be inequitable or harmful is well documented, yet the issue continues to lack apparent urgency or effective mitigation. The question we raise in this paper is how to prepare future generations to recognize and grapple with the ethical concerns of a range of issues plaguing AI technologies, particularly when they are combined with surveillance technologies in ways that have grave implications for social participation and restriction—from risk assessment and bail assignment in criminal justice to public benefits distribution and access to housing and other critical resources that enable security and success within society. The US is a mecca of information and computer science learning for Asian students, whose experiences as minorities render them familiar with, and vulnerable to, the societal bias that feeds artificial intelligence (AI) bias. Our goal was to better understand how students who are being educated to design AI systems think about these issues and, in particular, their sensitivity to the intersectional considerations that heighten risk for vulnerable groups. In this paper we report on findings from qualitative interviews with 20 graduate students, 11 from an AI class and 9 from a Data Mining class. We find that students are not predisposed to think deeply about the implications of AI design for the privacy and well-being of others unless explicitly encouraged to do so.
When they do, their thinking is focused through the lens of personal identity and experience, but their reflections tend to center on bias, an intrinsic feature of design, rather than on fairness, an outcome that requires them to imagine the consequences of AI. While they are, in fact, equipped to think about fairness when prompted by discussion and by design exercises that explicitly invite consideration of intersectionality and structural inequalities, many need help to do this empathy “work.” Notably, the students who more frequently reflect on intersectional problems related to bias and fairness are also more likely to consider the connection between model attributes and bias, and its interaction with context. Our findings suggest that experience with identity-based vulnerability promotes more analytically complex thinking about AI, lending further support to the argument that identity-related ethics should be integrated into computer science and data science curricula rather than positioned as a stand-alone course.

Item The Politics of Privacy Theories: Moving from Norms to Vulnerabilities (Association for Computing Machinery, 2020-04-25) McDonald, Nora; Forte, Andrea
Privacy and surveillance are central features of public discourse around the use of computing systems. As the systems we design and study are increasingly used and regulated as potential instruments of surveillance, HCI researchers—even those whose focus is not privacy—find themselves needing to understand privacy in their work. Concepts like contextual integrity and boundary regulation have become touchstones for thinking about privacy in HCI. In this paper, we draw on the HCI and privacy literature to understand the limitations of commonly used theories and examine their assumptions, politics, strengths, and weaknesses. We use a case study from the HCI literature to illustrate conceptual gaps in existing frameworks where privacy requirements can fall through.
Finally, we advocate vulnerability as a core concept for privacy theorizing and examine how feminist, queer-Marxist, and intersectional thinking may augment our existing repertoire of privacy theories to create a more inclusive scholarship and design practice.

Item Privacy and Power: Acknowledging the Importance of Privacy Research and Design for Vulnerable Populations (ACM, 2020-04) McDonald, Nora; Badillo-Urquiola, Karla; Ames, Morgan G.; Dell, Nicola; Keneski, Elizabeth; Sleeper, Manya; Wisniewski, Pamela J.
Privacy researchers and designers must take into consideration the unique needs and challenges of vulnerable populations. Normative and privileged lenses can impair conceptualizations of identities and privacy needs, and can reinforce or exacerbate power structures and struggles—and how these are formalized within privacy research methods, theories, designs, and analytical tools. The aim of this one-day workshop is to facilitate discourse around alternative ways of thinking about privacy and power, as well as ways of researching and designing technologies that not only respect the privacy needs of vulnerable populations but also attempt to empower them. We will work towards developing best practices that help academics, industry practitioners, technologists, researchers, policy makers, and designers better serve the privacy needs of vulnerable users of technology.

Item Realizing Choice: Online Safeguards for Couples Adapting to Cognitive Challenges (USENIX, 2020-08-11) McDonald, Nora; Larsen, Alison; Battisti, Allison; Madjaroff, Galina; Massey, Aaron; Mentis, Helena
This paper investigates qualitatively what happens when couples facing a spectrum of options must arrive at consensual choices together. We conducted an observational study of couples in which one or both partners experience memory concerns, as the partners engaged in the process of reviewing and selecting “Safety Setting” options for online activities.
Couples’ choices tended to be influenced by a desire to secure shared assets through mutual surveillance and a desire to preserve autonomy by granting freedom in social and personal activities. The availability of choice suits the uneven and unpredictable process of memory loss and couples’ acknowledged uncertainty about its trajectory, leading them to anticipate changing Safety Settings as one or both of them experience further cognitive decline. Reflecting these three decision drivers, we conclude with implications for a design system that offers flexibility and adaptability across a variety of settings, accommodates the uncertainty of memory loss, preserves autonomy, and supports collaborative management of shared assets.