The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context


Citation of Original Publication

Hanson, Alyssa, Nichole Starr, Cloe Emnett, Ruchen Wen, Bertram Malle, and Tom Williams. "The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context." ACM/IEEE International Conference on Human-Robot Interaction, January 2024. https://doi.org/10.1145/3610977.3634942.

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is covered by a Creative Commons license, contact the copyright holder or the author for any use protected by copyright.
CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

Abstract

Due to their unique persuasive power, language-capable robots must be able to both adhere to and communicate human moral norms. These requirements are complicated by the possibility that people may blame humans and robots differently for violating those norms. These complications raise particular challenges for robots giving moral advice to decision makers, as advisors and deciders may be blamed differently for endorsing the same moral action. In this work, we thus explore how people morally evaluate human and robot advisors to human and robot deciders. In Experiment 1 (N = 555), we examine human blame judgments of robot and human moral advisors and find clear evidence for an advice-as-decision hypothesis: advisors are blamed similarly to how they would be blamed for making the decisions they advised. In Experiment 2 (N = 1326), we examine blame judgments of a robot or human decider following the advice of a robot or human moral advisor. We replicate the results from Experiment 1 and also find clear evidence for a differential dismissal hypothesis: moral deciders are penalized for ignoring moral advice, especially when a robot ignores human advice. Our results raise novel questions about people's perception of moral advice, especially when it involves robots, and present challenges for the design of morally competent robots.