Learning from human-robot interactions in modeled scenes
dc.contributor.author | Murnane, Mark | |
dc.contributor.author | Breitmeyer, Max | |
dc.contributor.author | Ferraro, Francis | |
dc.contributor.author | Matuszek, Cynthia | |
dc.contributor.author | Engel, Don | |
dc.date.accessioned | 2021-04-30T15:20:27Z | |
dc.date.available | 2021-04-30T15:20:27Z | |
dc.description | SIGGRAPH '19: ACM SIGGRAPH 2019 Posters, July 2019, Article No. 1, Pages 1–2 | en_US |
dc.description.abstract | There is increasing interest in using robots in simulation to understand and improve human-robot interaction (HRI). At the same time, the use of simulated settings to gather training data promises to help address a major data bottleneck in allowing robots to take advantage of powerful machine learning approaches. In this paper, we describe a prototype system that combines the Robot Operating System (ROS), the simulator Gazebo, and the Unity game engine to create human-robot interaction scenarios. A person can engage with the scenario using a monitor wall, allowing simultaneous collection of realistic sensor data and traces of human actions. | en_US |
dc.description.sponsorship | This material is based upon work supported by the National Science Foundation under Grants No. 1531491 and 1428204. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Support was also provided for this work by the Next Century Corporation. | en_US |
dc.description.uri | https://dl.acm.org/doi/abs/10.1145/3306214.3338546 | en_US |
dc.format.extent | 2 pages | en_US |
dc.genre | conference papers and proceedings | en_US |
dc.identifier | doi:10.13016/m2wosn-jvmw | |
dc.identifier.citation | Mark Murnane, Max Breitmeyer, Francis Ferraro, Cynthia Matuszek, and Don Engel. 2019. Learning from human-robot interactions in modeled scenes. In ACM SIGGRAPH 2019 Posters (SIGGRAPH '19). Association for Computing Machinery, New York, NY, USA, Article 1, 1–2. DOI:https://doi.org/10.1145/3306214.3338546 | en_US |
dc.identifier.uri | https://doi.org/10.1145/3306214.3338546 | |
dc.identifier.uri | http://hdl.handle.net/11603/21408 | |
dc.language.iso | en_US | en_US |
dc.publisher | ACM | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Office for the Vice President of Research | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.subject | Robotics | en_US |
dc.subject | Virtual Reality | en_US |
dc.subject | Machine Learning | en_US |
dc.title | Learning from human-robot interactions in modeled scenes | en_US |
dc.type | Text | en_US |