A Spoken Language Dataset of Descriptions for Speech-Based Grounded Language Learning

dc.contributor.author: Kebe, Gaoussou Youssouf
dc.contributor.author: Higgins, Padraig
dc.contributor.author: Jenkins, Patrick
dc.contributor.author: Darvish, Kasra
dc.contributor.author: Sachdeva, Rishabh
dc.contributor.author: Barron, Ryan
dc.contributor.author: Winder, John
dc.contributor.author: Engel, Donald
dc.contributor.author: Raff, Edward
dc.contributor.author: Ferraro, Francis
dc.contributor.author: Matuszek, Cynthia
dc.date.accessioned: 2021-06-25T22:16:40Z
dc.date.available: 2021-06-25T22:16:40Z
dc.date.issued: 2021-06-08
dc.description: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks
dc.description.abstract: Grounded language acquisition is a major area of research combining aspects of natural language processing, computer vision, and signal processing, compounded by domain issues requiring sample efficiency and other deployment constraints. In this work, we present a multimodal dataset of RGB+depth objects with spoken as well as textual descriptions. We analyze the differences between the two types of descriptive language and our experiments demonstrate that the different modalities affect learning. This will enable researchers studying the intersection of robotics, NLP, and HCI to better investigate how the multiple modalities of image, depth, text, speech, and transcription interact, as well as how differences in the vernacular of these modalities impact results.
dc.description.sponsorship: This material is based in part upon work supported by the National Science Foundation under Grant Nos. 1940931 and 1637937. Some experiments were conducted on the UMBC High Performance Computing Facility, funded by the National Science Foundation under Grant Nos. 1940931 and 2024878. This material is also based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government.
dc.description.uri: https://neurips.cc/virtual/2021/poster/22767
dc.format.extent: 11 slides
dc.genre: video recordings
dc.identifier: doi:10.13016/m2bmhe-tmzc
dc.identifier.citation: Kebe, Gaoussou Youssouf et al.; A Spoken Language Dataset of Descriptions for Speech-Based Grounded Language Learning; NeurIPS 2021 Track on Datasets and Benchmarks, Round 1 submission, 8 June 2021; https://openreview.net/forum?id=Yx9jT3fkBaD
dc.identifier.uri: http://hdl.handle.net/11603/21838
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Office for the Vice President of Research
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: grounded language acquisition
dc.subject: speech processing
dc.subject: computer vision
dc.subject: natural language processing
dc.title: A Spoken Language Dataset of Descriptions for Speech-Based Grounded Language Learning
dc.type: Text
dc.type: Moving Image
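
The abstract above names five aligned modalities per object instance: RGB image, depth, written text, recorded speech, and transcription. As a purely illustrative sketch of how such a record might be represented in code, the field names, file layout, and example values below are assumptions, not the dataset's actual schema; consult the released data for its real structure.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class GroundedLanguageRecord:
    """Hypothetical container for one object instance and its descriptions.

    Field names and paths are illustrative only; they are not taken from
    the dataset's documentation.
    """
    rgb_image: Path         # color image of the object
    depth_image: Path       # aligned depth frame
    text_description: str   # typed (written) description
    speech_audio: Path      # recorded spoken description
    transcription: str      # transcription of the spoken description


# Example of constructing a record from assumed paths and strings.
record = GroundedLanguageRecord(
    rgb_image=Path("objects/apple_1/rgb.png"),
    depth_image=Path("objects/apple_1/depth.png"),
    text_description="a red apple with a short stem",
    speech_audio=Path("objects/apple_1/description.wav"),
    transcription="a red apple with a short stem",
)
```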

Files

License bundle (1 of 1)
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission