Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions
dc.contributor.author | Matuszek, Cynthia | |
dc.contributor.author | Bo, Liefeng | |
dc.contributor.author | Zettlemoyer, Luke | |
dc.contributor.author | Fox, Dieter | |
dc.date.accessioned | 2018-09-05T20:57:39Z | |
dc.date.available | 2018-09-05T20:57:39Z | |
dc.date.issued | 2014-07 | |
dc.description | Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014. | en_US |
dc.description.abstract | As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to state-of-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively captures user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands while continuing to learn from user input. | en_US |
dc.description.sponsorship | The work was funded in part by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC), by ARO grant W911NF-12-1-0197, and through collaborative participation in the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the CTA Program (Cooperative Agreement W911NF-10-2-0016). We also thank Fortiss GmbH and Manuel Giuliani for work on gathering and annotating the data corpus, substantial assistance, and many helpful conversations. | en_US |
dc.description.uri | https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8327 | en_US |
dc.format.extent | 8 pages | en_US |
dc.genre | conference papers and proceedings preprints | en_US |
dc.identifier | doi:10.13016/M2RN30B52 | |
dc.identifier.citation | Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, Dieter Fox, Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions, Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014, https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8327 | en_US |
dc.identifier.uri | http://hdl.handle.net/11603/11242 | |
dc.language.iso | en_US | en_US |
dc.publisher | Association for the Advancement of Artificial Intelligence (AAAI) | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author. | |
dc.subject | Gesture | en_US |
dc.subject | Natural Language | en_US |
dc.subject | Human-Robot Interaction | en_US |
dc.subject | Interactive Robotics and Language Lab | en_US |
dc.title | Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions | en_US |
dc.type | Text | en_US |