The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context

dc.contributor.author: Hanson, Alyssa
dc.contributor.author: Starr, Nichole D.
dc.contributor.author: Emnett, Cloe
dc.contributor.author: Wen, Ruchen
dc.contributor.author: Malle, Bertram F.
dc.contributor.author: Williams, Tom
dc.date.accessioned: 2024-01-18T04:02:23Z
dc.date.available: 2024-01-18T04:02:23Z
dc.date.issued: 2024-01
dc.description: HRI ’24, March 11–14, 2024, Boulder, CO, USA
dc.description.abstract: Due to their unique persuasive power, language-capable robots must be able to both adhere to and communicate human moral norms. These requirements are complicated by the possibility that people may blame humans and robots differently for violating those norms. These complications raise particular challenges for robots giving moral advice to decision makers, as advisors and deciders may be blamed differently for endorsing the same moral action. In this work, we thus explore how people morally evaluate human and robot advisors to human and robot deciders. In Experiment 1 (N = 555), we examine human blame judgments of robot and human moral advisors and find clear evidence for an advice-as-decision hypothesis: advisors are blamed similarly to how they would be blamed for making the decisions they advised. In Experiment 2 (N = 1326), we examine blame judgments of a robot or human decider following the advice of a robot or human moral advisor. We replicate the results from Experiment 1 and also find clear evidence for a differential dismissal hypothesis: moral deciders are penalized for ignoring moral advice, especially when a robot ignores human advice. Our results raise novel questions about people's perception of moral advice, especially when it involves robots, and present challenges for the design of morally competent robots.
dc.description.sponsorship: Tom Williams’ work was funded in part by a National Science Foundation grant and by Air Force Office of Scientific Research (AFOSR) Young Investigator Award 19RT0497. Bertram Malle’s work was funded in part by AFOSR Award FA9550-21-1-0359. The authors would like to thank Katherine Aubert for her assistance.
dc.format.extent: 10 pages
dc.genre: conference papers and proceedings
dc.identifier.citation: Hanson, Alyssa, Nichole Starr, Cloe Emnett, Ruchen Wen, Bertram Malle, and Tom Williams. “The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context,” ACM/IEEE International Conference on Human-Robot Interaction, January 2024. https://doi.org/10.1145/3610977.3634942.
dc.identifier.uri: http://dx.doi.org/10.1145/3610977.3634942
dc.identifier.uri: http://hdl.handle.net/11603/31359
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: CC BY-NC-SA 4.0 DEED Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.title: The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-1590-1787

Files

Original bundle

Name: HansonStarr2024HRI__HRI015_5.pdf
Size: 2.28 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed to upon submission