Language and Gesture in Virtual Reality: Is a Gesture Worth 1000 Words?
| dc.contributor.author | Higgins, Padraig | |
| dc.contributor.author | Hayes, Cory J. | |
| dc.contributor.author | Lukin, Stephanie | |
| dc.contributor.author | Matuszek, Cynthia | |
| dc.date.accessioned | 2026-01-06T20:51:55Z | |
| dc.date.issued | 2025-11-23 | |
| dc.description | 2025 AAAI Fall Symposium, November 6-8, 2025, Arlington, VA, USA | |
| dc.description.abstract | Robots are increasingly incorporating multimodal information and human signals to resolve ambiguity in embodied human-robot interaction. Harnessing signals such as gestures may expedite robot exploration in large, outdoor urban environments for supporting disaster recovery operations, where speech may be unclear due to noise or the challenges of a dynamic and dangerous environment. Despite this potential, capturing human gesture and properly grounding it to crowded, outdoor environments remains a challenge. In this work, we propose a method to model human gesture and ground it to spoken language instructions given to a robot for execution in large spaces. We implement our method in virtual reality to develop a workflow for faster future data collection. We present a series of proposed experiments that compare a language-only baseline to our proposed language-supplemented-by-gesture approach, and discuss how our approach has the potential to reinforce the human’s intent and detect discrepancies in gesture and spoken instructions in these large and crowded environments. | |
| dc.description.sponsorship | This work was sponsored by the National Science Foundation, award numbers 2435593 and 2346667. | |
| dc.description.uri | https://ojs.aaai.org/index.php/AAAI-SS/article/view/36947 | |
| dc.format.extent | 5 pages | |
| dc.genre | conference papers and proceedings | |
| dc.identifier | doi:10.13016/m2lb7f-xzr9 | |
| dc.identifier.citation | Higgins, Padraig, Cory J. Hayes, Stephanie Lukin, and Cynthia Matuszek. “Language and Gesture in Virtual Reality: Is a Gesture Worth 1000 Words?” Proceedings of the AAAI Symposium Series 7, no. 1 (2025): 658–62. https://doi.org/10.1609/aaaiss.v7i1.36947. | |
| dc.identifier.uri | https://doi.org/10.1609/aaaiss.v7i1.36947 | |
| dc.identifier.uri | http://hdl.handle.net/11603/41391 | |
| dc.language.iso | en | |
| dc.publisher | AAAI | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Student Collection | |
| dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law. | |
| dc.rights | Public Domain | |
| dc.rights.uri | https://creativecommons.org/publicdomain/mark/1.0/ | |
| dc.subject | UMBC Interactive Robotics and Language Lab | |
| dc.subject | UMBC Ebiquity Research Group | |
| dc.title | Language and Gesture in Virtual Reality: Is a Gesture Worth 1000 Words? | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-1383-8120 | |
Files
Original bundle
- Name: 36947ArticleText410241220251123(1).pdf
- Size: 3.04 MB
- Format: Adobe Portable Document Format
