Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions

dc.contributor.author: Matuszek, Cynthia
dc.contributor.author: Bo, Liefeng
dc.contributor.author: Zettlemoyer, Luke
dc.contributor.author: Fox, Dieter
dc.date.accessioned: 2018-09-05T20:57:39Z
dc.date.available: 2018-09-05T20:57:39Z
dc.date.issued: 2014-07
dc.description: Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
dc.description.abstract: As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to state-of-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively captures user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands while continuing to learn from user input.
dc.description.sponsorship: The work was funded in part by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC), by ARO grant W911NF-12-1-0197, and through collaborative participation in the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the CTA Program (Cooperative Agreement W911NF-10-2-0016). We also thank Fortiss GmbH and Manuel Giuliani for work on gathering and annotating the data corpus, substantial assistance, and many helpful conversations.
dc.description.uri: https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8327
dc.format.extent: 8 pages
dc.genre: conference papers and proceedings preprints
dc.identifier: doi:10.13016/M2RN30B52
dc.identifier.citation: Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, and Dieter Fox, "Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions," Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014, https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8327
dc.identifier.uri: http://hdl.handle.net/11603/11242
dc.language.iso: en_US
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI)
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.
dc.subject: Gesture
dc.subject: Natural Language
dc.subject: Human-Robot Interaction
dc.subject: Interactive Robotics and Language Lab
dc.title: Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions
dc.type: Text
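
The abstract above notes that combining language and gesture captures user intent more effectively than either modality alone, but this record carries no implementation detail. As a minimal illustrative sketch only, the snippet below assumes the two modalities are scored independently over candidate objects and fused by summing log-probabilities; this is a common grounding heuristic, not the paper's actual model, and every name in it is hypothetical.

import math

def ground_object(candidates, language_score, gesture_score):
    # Score each candidate object under both modalities and return the best one.
    # Assumes the two modalities are conditionally independent given the object,
    # which is one common fusion choice, not necessarily the paper's.
    best, best_logp = None, -math.inf
    for obj in candidates:
        logp = math.log(language_score(obj)) + math.log(gesture_score(obj))
        if logp > best_logp:
            best, best_logp = obj, logp
    return best

# Toy usage with made-up scores for the utterance "the red mug" plus a pointing gesture.
objects = ["red_mug", "blue_bowl", "green_cup"]
lang = {"red_mug": 0.7, "blue_bowl": 0.1, "green_cup": 0.2}.get
gest = {"red_mug": 0.5, "blue_bowl": 0.4, "green_cup": 0.1}.get
print(ground_object(objects, lang, gest))  # -> red_mug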

Files

Original bundle
  Name: MatuszekAAAI2014LangPlusGesture.pdf
  Size: 4.08 MB
  Format: Adobe Portable Document Format

License bundle
  Name: license.txt
  Size: 1.68 KB
  Format: Item-specific license agreed upon to submission