A Joint Model of Language and Perception for Grounded Attribute Learning
dc.contributor.author | Matuszek, Cynthia | |
dc.contributor.author | FitzGerald, Nicholas | |
dc.contributor.author | Zettlemoyer, Luke | |
dc.contributor.author | Bo, Liefeng | |
dc.contributor.author | Fox, Dieter | |
dc.date.accessioned | 2019-07-09T15:41:20Z | |
dc.date.available | 2019-07-09T15:41:20Z | |
dc.date.issued | 2012-06-27 | |
dc.description.abstract | As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to the study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physically grounded scenes. | en_US |
dc.description.sponsorship | This work was funded in part by the Intel Science and Technology Center for Pervasive Computing, the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program (W911NF-10-2-0016), and NSF grant IIS-1115966. | en_US |
dc.description.uri | https://arxiv.org/abs/1206.6423 | en_US |
dc.format.extent | 8 pages | en_US |
dc.genre | conference papers and proceedings preprints | en_US |
dc.identifier | doi:10.13016/m2ehu6-1m90 | |
dc.identifier.citation | Cynthia Matuszek, et al., A Joint Model of Language and Perception for Grounded Attribute Learning, 29th International Conference on Machine Learning (ICML 2012), https://arxiv.org/abs/1206.6423 | en_US |
dc.identifier.uri | http://hdl.handle.net/11603/14362 | |
dc.language.iso | en_US | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.subject | grounded attribute learning | en_US |
dc.subject | probabilistic categorial grammar | en_US |
dc.subject | latent-variable concept induction in physically grounded scenes | en_US |
dc.subject | Interactive Robotics and Language Lab | |
dc.title | A Joint Model of Language and Perception for Grounded Attribute Learning | en_US |
dc.type | Text | en_US |