LLM-Supported Safety Annotation in High-Risk Environments
Author/Creator
Eskandari, Mohammad; Indukuri, Murali; Lukin, Stephanie M.; Matuszek, Cynthia
Date
2025-02-13
Citation of Original Publication
Eskandari, Mohammad, Murali Indukuri, Stephanie M. Lukin, and Cynthia Matuszek. "LLM-Supported Safety Annotation in High-Risk Environments," 2025. https://openreview.net/forum?id=Ewg3WsMBRv.
Rights
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain
Abstract
This paper explores how robots supported by large language models can assist in detecting anomalies in high-risk environments, and how users perceive the usability and reliability of such a system within a safe virtual environment. We present a system in which a robot, using a state-of-the-art vision-language model, autonomously annotates potential hazards in a virtual world and provides users with contextual safety information via a VR interface. We conducted a user study to evaluate the system across metrics such as trust, user satisfaction, and efficiency. Results demonstrated high user satisfaction and clear hazard communication, while trust remained moderate.
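The repository page does not include the authors' implementation. As a loose illustration of the kind of pipeline the abstract describes, the sketch below sends a single rendered scene frame to a commercial vision-language model and asks it to enumerate visible hazards. The model name (gpt-4o), the use of the openai Python SDK, and the prompt wording are assumptions made for illustration only, not the authors' actual system.

```python
import base64

from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment; the model choice is an
# illustrative assumption, not the one used in the paper.
client = OpenAI()


def annotate_hazards(image_path: str) -> str:
    """Ask a vision-language model to list potential hazards in a scene image."""
    # Encode the scene frame as base64 so it can be sent inline.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "List any safety hazards visible in this scene, "
                            "one per line, each with a one-sentence explanation."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical input: a frame captured from the virtual environment.
    print(annotate_hazards("scene_frame.jpg"))
```

In a system like the one described, the returned hazard list would then be surfaced to the user as contextual annotations in the VR interface; that presentation layer is outside the scope of this sketch.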