A Joint Model of Language and Perception for Grounded Attribute Learning
Author/Creator
Cynthia Matuszek, et al.
Date
2012-06-27
Citation of Original Publication
Cynthia Matuszek, et al., "A Joint Model of Language and Perception for Grounded Attribute Learning," Proceedings of the 29th International Conference on Machine Learning (ICML 2012), https://arxiv.org/abs/1206.6423
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to the study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physically grounded scenes.
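As a rough illustration of the setup the abstract describes (not the authors' implementation), the sketch below grounds a hand-coded logical form for a phrase like "the yellow blocks" by composing per-attribute classifier scores over a toy scene. The Object structure, the classifier scores, and the interpret function are hypothetical stand-ins; in the paper itself, logical forms come from a learned probabilistic categorial grammar parser, and the attribute classifiers are trained jointly with it.

    # Hypothetical sketch: grounding a compositional meaning representation
    # via per-attribute classifiers. Names and scores are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Object:
        # Pretend outputs of trained attribute classifiers for this object.
        scores: dict

    def is_yellow(obj):
        return obj.scores.get("yellow", 0.0)   # stand-in for P(color = yellow | object)

    def is_block(obj):
        return obj.scores.get("block", 0.0)    # stand-in for P(shape = block | object)

    def interpret(predicates, scene, threshold=0.5):
        # A sentence such as "the yellow blocks" parses to a conjunction of
        # attribute predicates; return the scene objects satisfying all of them.
        return [o for o in scene
                if all(p(o) > threshold for p in predicates)]

    scene = [Object({"yellow": 0.92, "block": 0.85}),   # a yellow block
             Object({"yellow": 0.10, "block": 0.90})]   # a non-yellow block
    print(interpret([is_yellow, is_block], scene))      # selects only the first object

The point of the joint formulation is that neither side is fixed in advance: the parser's lexicon and the attribute classifiers (including classifiers for newly induced attributes) are learned together from sentences paired with scenes.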