Combining World and Interaction Models for Human-Robot Collaborations
Type of Work: Conference papers and proceedings preprints (7 pages)
Citation of Original Publication: Cynthia Matuszek, Andrzej Pronobis, Luke Zettlemoyer, Dieter Fox, "Combining World and Interaction Models for Human-Robot Collaborations," AAAI 2013 Workshop on Intelligent Robotic Systems, Bellevue, WA, July 2013.
Rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.
As robotic technologies mature, we can imagine a growing number of applications in which robots could soon prove useful in unstructured human environments. Many of these applications require a natural interface between the robot and untrained human users, or are possible only in a human-robot collaborative scenario. In this paper, we study an example of such a scenario, in which a visually impaired person and a robotic "guide" collaborate in an unfamiliar environment. We then analyze how the scenario can be realized through language- and gesture-based human-robot interaction, combined with semantic spatial understanding and reasoning, and propose an integration of a semantic world model with language and gesture models for several collaboration modes. We believe that in this way, practical robotic applications can be achieved in human environments using currently available technology.