Photogrammetry and VR for Comparing 2D and Immersive Linguistic Data Collection (Student Abstract)
Collections
UMBC Center for Space Sciences and Technology (CSST) / Center for Research and Exploration in Space Science & Technology II (CRESST II)
UMBC Computer Science and Electrical Engineering Department
UMBC Faculty Collection
UMBC Office for the Vice President of Research & Creative Achievement (ORCA)
UMBC Physics Department
UMBC Student Collection
Date
2023-06-26
Citation of Original Publication
Rubinstein, Jacob, Cynthia Matuszek, and Don Engel. “Photogrammetry and VR for Comparing 2D and Immersive Linguistic Data Collection (Student Abstract).” Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16312–13. https://doi.org/10.1609/aaai.v37i13.27016.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
The overarching goal of this work is to enable the collection of language describing a wide variety of objects viewed in virtual reality. We aim to create full 3D models from a small number of ‘keyframe’ images of objects found in the publicly available Grounded Language Dataset (GoLD) using photogrammetry. We will then collect linguistic descriptions by placing our models in virtual reality and having volunteers describe them. To evaluate the impact of virtual reality immersion on linguistic descriptions of the objects, we intend to apply contrastive learning to perform grounded language learning, then compare the descriptions collected from 2D images (in GoLD) with those collected from our 3D models in virtual reality.
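The evaluation described above hinges on contrastive grounded language learning: embeddings of an object's visual appearance and of its linguistic description are pulled together when they refer to the same object and pushed apart otherwise. The abstract does not give the authors' implementation; as a minimal illustrative sketch, a symmetric InfoNCE-style loss over toy paired vision/language embeddings (all names and values here are hypothetical) might look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(vision, language, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss.

    vision[i] and language[i] are embeddings of the same object
    (a positive pair); all other pairings in the batch act as
    negatives. Lower loss means better vision-language alignment.
    """
    n = len(vision)
    # Temperature-scaled similarity matrix: sims[i][j] compares
    # vision embedding i with language embedding j.
    sims = [[cosine(v, l) / temperature for l in language] for v in vision]
    loss = 0.0
    for i in range(n):
        # Vision -> language direction: softmax cross-entropy with
        # the matching description as the correct class.
        row = sims[i]
        loss += -row[i] + math.log(sum(math.exp(s) for s in row))
        # Language -> vision direction, symmetrically.
        col = [sims[j][i] for j in range(n)]
        loss += -col[i] + math.log(sum(math.exp(s) for s in col))
    return loss / (2 * n)

# Toy check: correctly paired embeddings should score a lower loss
# than deliberately shuffled pairs.
vision = [[1.0, 0.0], [0.0, 1.0]]
aligned = [[1.0, 0.1], [0.1, 1.0]]
shuffled = list(reversed(aligned))
assert info_nce(vision, aligned) < info_nce(vision, shuffled)
```

Under a setup like this, the same loss could be trained once on GoLD's 2D-image descriptions and once on VR-collected descriptions of the photogrammetric models, with the resulting alignment quality serving as the basis for comparison.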