Sharing Learned Models Between Heterogeneous Robots: An Image-Driven Interpretation

dc.contributor.advisor: Matuszek, Cynthia
dc.contributor.author: Potnis, Isha Rahul
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Computer Science
dc.date.accessioned: 2021-01-29T18:12:34Z
dc.date.available: 2021-01-29T18:12:34Z
dc.date.issued: 2018-01-01
dc.description.abstract: As robotics evolves to produce more affordable and proficient robots, it is becoming crucial for robots to become acquainted with their environments and tasks quickly. This requires training classifiers to identify objects denoted by natural language, a form of grounded language acquisition coupled with visual perception. Current approaches require extensive training data gathered from humans before a robot can learn these contextual models. For robots to work collaboratively, every robot must understand the task requirements and its corresponding environment, and teaching every robot these tasks separately would multiply the human interaction required. Research in `transfer learning' is gaining momentum as a way to avoid repetitive training and minimize human-robot interaction. With the advancement of personal-assistance robots in elderly care and teaching domains, where learned robot models are environment-specific, transferring a learned model to other robots with minimal loss of accuracy is crucial. Transfer learning between homogeneous robots is easy compared to transfer learning in a heterogeneous robot environment with different perceptual sensors. We propose a `chained learning approach' to transfer data between robots with different perceptual capabilities; differences in sensory processing and representation may lead to a gradual drop in transfer accuracy. We conduct experiments with co-located robots with similar sensory abilities, with qualitatively different camera sensors, and with non-co-located robots to test our learning approach. A comparative study of cutting-edge feature extraction algorithms helps us build an efficient pipeline for optimal knowledge transfer. Our preliminary experiments lay a foundation for efficient transfer learning in heterogeneous robot environments and introduce domain adaptation as a potential research direction for grounded language transfer.
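
The abstract names a `chained learning approach' without detailing its mechanics. As a reading aid, the following is a minimal, hypothetical Python sketch of how label transfer between two robots with different sensors could be chained, assuming a co-located session in which both robots observe the same scenes. The scikit-learn SVM classifiers, the feature dimensions, and all variable names (clf_a, X_shared_b, and so on) are illustrative assumptions, not the pipeline described in the thesis.

# Hypothetical sketch of chained label transfer between two robots with
# different perceptual sensors. The pipeline and all names here are
# illustrative assumptions, not the thesis's actual implementation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Robot A: a classifier already trained in its own feature space (e.g.,
# SIFT-style descriptors); synthetic 128-dim features, two object classes.
X_a = rng.normal(size=(200, 128))
y_a = rng.integers(0, 2, size=200)
clf_a = SVC().fit(X_a, y_a)

# Co-located session: both robots observe the same scenes, so each sample
# has a view in A's feature space and a view in B's (e.g., HMP features).
X_shared_a = rng.normal(size=(100, 128))   # A's view of the shared scenes
X_shared_b = rng.normal(size=(100, 512))   # B's view of the same scenes

# Chain step: A labels the shared scenes, and B trains its own classifier
# on its own representation using those transferred labels.
pseudo_labels = clf_a.predict(X_shared_a)
clf_b = SVC().fit(X_shared_b, pseudo_labels)

# Robot B can now classify new percepts without direct human training;
# accuracy degrades with each link in the chain, as the abstract notes.
print(clf_b.predict(rng.normal(size=(5, 512))))

The design point the sketch captures is that robot B never sees robot A's feature space: only labels cross the sensor boundary, which is what permits heterogeneous transfer at the cost of labeling error accumulating at each link in the chain.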
dc.format: application/pdf
dc.genre: theses
dc.identifier: doi:10.13016/m2trup-gnnr
dc.identifier.other: 11901
dc.identifier.uri: http://hdl.handle.net/11603/20722
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.source: Original File Name: Potnis_umbc_0434M_11901.pdf
dc.subject: feature extraction
dc.subject: grounded language acquisition
dc.subject: heterogeneous
dc.subject: hierarchical matching pursuit
dc.subject: SIFT
dc.subject: visual classification
dc.title: Sharing Learned Models Between Heterogeneous Robots: An Image-Driven Interpretation
dc.type: Text
dcterms.accessRights: Distribution Rights granted to UMBC by the author.
dcterms.accessRights: Access limited to the UMBC community. This item may be obtainable via Interlibrary Loan through a local library, pending the author/copyright holder's permission.
dcterms.accessRights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is under a Creative Commons license, contact the copyright holder or the author for uses protected by copyright law.

Files

Original bundle
Name: Potnis_umbc_0434M_11901.pdf
Size: 2.98 MB
Format: Adobe Portable Document Format

License bundle
Name: PotnisISharing_Open.pdf
Size: 44.6 KB
Format: Adobe Portable Document Format