Authors: Matuszek, Cynthia; Pronobis, Andrzej; Zettlemoyer, Luke; Fox, Dieter
Date Issued: July 2013
Date Deposited: 2018-09-05
Citation: Cynthia Matuszek, Andrzej Pronobis, Luke Zettlemoyer, Dieter Fox, "Combining World and Interaction Models for Human-Robot Collaborations," AAAI 2013 Workshop on Intelligent Robotic Systems, Bellevue, WA, July 2013.
URI: http://hdl.handle.net/11603/11243
Published In: AAAI 2013 Workshop on Intelligent Robotic Systems, Bellevue, WA, July 2013

Abstract: As robotic technologies mature, we can imagine a growing number of applications in which robots could soon prove useful in unstructured human environments. Many of these applications require a natural interface between the robot and untrained human users, or are possible only in a human-robot collaborative scenario. In this paper, we study an example of such a scenario, in which a visually impaired person and a robotic “guide” collaborate in an unfamiliar environment. We then analyze how this scenario can be realized through language- and gesture-based human-robot interaction combined with semantic spatial understanding and reasoning, and propose an integration of a semantic world model with language and gesture models for several collaboration modes. We believe that in this way, practical robotic applications can be achieved in human environments using currently available technology.

Extent: 7 pages
Language: en-US
Rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.
Subjects: grounded language acquisition; conceptual map; Interactive Robotics and Language Lab
Title: Combining World and Interaction Models for Human-Robot Collaborations
Type: Text