Sharing Learned Models Between Heterogeneous Robots: An Image-Driven Interpretation

Author/Creator

Author/Creator ORCID

Date

2018-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Citation of Original Publication

Rights

Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may be obtained via Interlibrary Loan through a local library, pending the author/copyright holder's permission.
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.

Abstract

As robots become more affordable and capable, it is increasingly important for them to learn about their environments and tasks quickly. This requires training classifiers to identify objects denoted by natural language, a form of grounded language acquisition coupled with visual perception. Current approaches require extensive human-provided training data for robots to learn such contextual models. For robots to work collaboratively, every robot must understand the task requirements and its corresponding environment; teaching each robot these tasks separately would multiply the amount of human-robot interaction required. Research in `transfer learning' is gaining momentum as a way to avoid repetitive training and minimize human-robot interaction. With the growth of personal-assistance applications in elderly care and teaching, where learned robot models are environment-specific, transferring a learned model to other robots with minimal loss of accuracy is crucial. Transfer learning between homogeneous robots is straightforward compared to transfer in a heterogeneous robot environment with different perceptual sensors. We propose a `chained learning approach' to transfer data between robots with different perceptual capabilities; differences in sensory processing and representation may lead to a gradual drop in transfer accuracy along the chain. We conduct experiments with co-located robots with similar sensory abilities, with qualitatively different camera sensors, and with non-co-located robots to test our learning approach. A comparative study of state-of-the-art feature extraction algorithms helps us build an efficient pipeline for knowledge transfer. Our preliminary experiments lay a foundation for efficient transfer learning in heterogeneous robot environments and introduce domain adaptation as a promising direction for grounded language transfer.
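To make the chained-learning idea concrete, the following is a minimal, hypothetical sketch, not the thesis's actual pipeline: robot A's trained classifier labels objects that both robots co-observe, and robot B then trains its own classifier, in its own feature space, on those pseudo-labels. All feature dimensions, models, and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of one chained-learning hop; all data is synthetic.
rng = np.random.default_rng(0)

# Robot A: human-labeled training data in its own feature space
# (e.g., features extracted from its camera images).
X_a = rng.normal(size=(200, 64))
y_a = rng.integers(0, 3, size=200)
model_a = LogisticRegression(max_iter=1000).fit(X_a, y_a)

# Co-located observation: both robots perceive the same objects,
# but robot B's sensor yields a different feature representation.
X_a_shared = rng.normal(size=(100, 64))   # robot A's view of shared objects
X_b_shared = rng.normal(size=(100, 32))   # robot B's view of the same objects

# Chain step: robot A labels the shared objects with its model;
# robot B trains its own classifier on its own features from those labels,
# with no additional human labeling.
pseudo_labels = model_a.predict(X_a_shared)
model_b = LogisticRegression(max_iter=1000).fit(X_b_shared, pseudo_labels)
```

Each hop in such a chain can compound label noise, which is one way to read the gradual accuracy drop the abstract describes for robots with different perceptual sensors.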