The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context
| dc.contributor.author | Hanson, Alyssa | |
| dc.contributor.author | Starr, Nichole D | |
| dc.contributor.author | Emnett, Cloe | |
| dc.contributor.author | Wen, Ruchen | |
| dc.contributor.author | Malle, Bertram F. | |
| dc.contributor.author | Williams, Tom | |
| dc.date.accessioned | 2024-01-18T04:02:23Z | |
| dc.date.available | 2024-01-18T04:02:23Z | |
| dc.date.issued | 2024-01 | |
| dc.description | HRI ’24, March 11–14, 2024, Boulder, CO, USA | |
| dc.description.abstract | Due to their unique persuasive power, language-capable robots must be able to both adhere to and communicate human moral norms. These requirements are complicated by the possibility that people may blame humans and robots differently for violating those norms. These complications raise particular challenges for robots giving moral advice to decision makers, as advisors and deciders may be blamed differently for endorsing the same moral action. In this work, we thus explore how people morally evaluate human and robot advisors to human and robot deciders. In Experiment 1 (N = 555), we examine human blame judgments of robot and human moral advisors and find clear evidence for an advice as decision hypothesis: advisors are blamed similarly to how they would be blamed for making the decisions they advised. In Experiment 2 (N = 1326), we examine blame judgments of a robot or human decider following the advice of a robot or human moral advisor. We replicate the results from Experiment 1 and also find clear evidence for a differential dismissal hypothesis: moral deciders are penalized for ignoring moral advice, especially when a robot ignores human advice. Our results raise novel questions about people's perception of moral advice, especially when it involves robots, and present challenges for the design of morally competent robots. | |
| dc.description.sponsorship | Tom Williams’ work was funded in part by a National Science Foundation grant and by Air Force Office of Scientific Research (AFOSR) Young Investigator Award 19RT0497. Bertram Malle’s work was funded in part by AFOSR Award FA9550-21-1-0359. The authors would like to thank Katherine Aubert for her assistance. | |
| dc.format.extent | 10 pages | |
| dc.genre | conference papers and proceedings | |
| dc.identifier.citation | Hanson, Alyssa, Nichole Starr, Cloe Emnett, Ruchen Wen, Bertram Malle, and Tom Williams. “The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context,” ACM/IEEE International Conference on Human-Robot Interaction, January 2024. https://doi.org/10.1145/3610977.3634942. | |
| dc.identifier.uri | http://dx.doi.org/10.1145/3610977.3634942 | |
| dc.identifier.uri | http://hdl.handle.net/11603/31359 | |
| dc.language.iso | en_US | |
| dc.publisher | ACM | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
| dc.relation.ispartof | UMBC Student Collection | |
| dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
| dc.rights | CC BY-NC-SA 4.0 DEED Attribution-NonCommercial-ShareAlike 4.0 International | en |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0/ | |
| dc.title | The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-1590-1787 |
