Learning to Understand Non-Categorical Physical Language for Human Robot Interactions
Citation of Original Publication
Luke E. Richards and Cynthia Matuszek. "Learning to Understand Non-Categorical Physical Language for Human Robot Interactions." In Proceedings of the RSS 2019 Workshop on AI and Its Alternatives in Assistive and Collaborative Robotics (RSS: AI+ACR), 2019. http://iral.cs.umbc.edu/Pubs/RichardsMatuszekRSSws2019.pdf
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless the item is under a Creative Commons license, contact the copyright holder or the author for uses protected by copyright law.
Abstract
Learning the meaning of language with respect to the physical world in which a robot operates is a necessary step for shared autonomy systems in which natural language is part of a user-specific, customizable interface. We propose a learning system in which language is grounded in visual percepts without pre-defined category constraints by combining CNN-based visual identification with natural language labels. By operating on shared visual/linguistic embeddings, the system moves toward letting people use language as a high-level control system for low-level world interactions. We evaluate the efficacy of this learning against a well-known object dataset, and report preliminary results that outline the feasibility of pursuing a visual-feature approach to domain-free language understanding.
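
The sketch below illustrates the general kind of shared visual/linguistic embedding the abstract describes: a small CNN encodes images, an averaged word-embedding model encodes free-form language labels, and a contrastive loss pulls matched image/label pairs together in a common space. This is a minimal illustration of the technique, not the authors' implementation; every module, dimension, and hyperparameter here is an assumption.

```python
# Minimal sketch (not the paper's implementation) of grounding free-form
# language labels in visual features via a shared embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualEncoder(nn.Module):
    """Tiny CNN standing in for the paper's visual identification network."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # global pooling -> (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, images):             # images: (B, 3, H, W)
        feats = self.conv(images).flatten(1)
        return F.normalize(self.proj(feats), dim=-1)

class LanguageEncoder(nn.Module):
    """Averages word embeddings, so labels need no fixed category inventory."""
    def __init__(self, vocab_size=5000, embed_dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")

    def forward(self, token_ids, offsets):  # flat token ids + per-label offsets
        return F.normalize(self.emb(token_ids, offsets), dim=-1)

def contrastive_loss(img_vecs, txt_vecs, temperature=0.07):
    """Matched image/label pairs score higher than mismatched ones."""
    logits = img_vecs @ txt_vecs.t() / temperature
    targets = torch.arange(len(img_vecs), device=img_vecs.device)
    return F.cross_entropy(logits, targets)

# Toy usage: one batch of random images, each paired with a 3-token label.
vis, lang = VisualEncoder(), LanguageEncoder()
images = torch.randn(4, 3, 64, 64)
tokens = torch.randint(0, 5000, (12,))
offsets = torch.tensor([0, 3, 6, 9])
loss = contrastive_loss(vis(images), lang(tokens, offsets))
loss.backward()
```

Because the language side embeds arbitrary token sequences rather than indexing a fixed label set, new descriptions can be grounded without retraining a classifier head; this is the property the abstract refers to as avoiding pre-defined category constraints.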