Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies

Citation of Original Publication

M. Murnane, M. Breitmeyer, C. Matuszek and D. Engel, "Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 1092-1093. doi: 10.1109/VR.2019.8798186. URL:


This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
© 2019 IEEE.


Abstract

Collecting data for robotics research, especially on human-robot interactions, traditionally requires a physical robot in a prepared environment, which presents substantial scalability challenges. First, robots introduce many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting, varied use cases; traditionally, this requires a prepared physical space for each scenario being studied. Finally, the expense of acquiring robots and preparing spaces seriously limits the reproducibility of experiments. We therefore propose a novel mechanism that uses virtual reality to simulate robotic sensor data in a series of prepared scenarios, yielding a reproducible data set that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. This evaluation shows that even a simple “sandbox” environment allows us to simulate robot sensor data, as well as the movement (e.g., view-port) and speech of humans interacting with the robot in a prescribed scenario.