Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention
Links to Files
Permanent Link
Author/Creator
Bharat Prakash, Mohit Khatwani, Nicholas Waytowich, Tinoosh Mohsenin
Author/Creator ORCID
Date
2019-03-22
Type of Work
Department
Program
Citation of Original Publication
Bharat Prakash, Mohit Khatwani, Nicholas Waytowich, Tinoosh Mohsenin, Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention, 2019, https://arxiv.org/abs/1903.09328
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Recent progress in AI and reinforcement learning has shown great success in solving complex problems with high-dimensional state spaces. However, most of these successes have come in simulated environments where failure carries little or no consequence. Real-world applications, by contrast, require training solutions that are safe to operate, as catastrophic failures are inadmissible, especially when human interaction is involved. Current safe RL systems use human oversight during training and exploration to ensure that the RL agent does not enter a catastrophic state. These methods require a large amount of human labor and are very difficult to scale. We present a hybrid method for reducing human intervention time by combining model-based approaches with a trained supervised learner, improving sample efficiency while also ensuring safety. We evaluate these methods on various grid-world environments using both standard and visual representations, and show that our approach outperforms traditional model-free approaches in terms of sample efficiency, number of catastrophic states reached, and overall task performance.
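The abstract describes training a supervised learner to take over the human overseer's role of blocking catastrophic actions. The paper's actual implementation is not reproduced here; the following is a minimal illustrative sketch, in Python, of that general pattern: a wrapper screens the agent's proposed actions with a learned "blocker" and substitutes a safe fallback when a catastrophe is predicted. All names (SafetyWrapper, Corridor, blocker, safe_action) are hypothetical, and the toy environment and hand-coded blocker stand in for the paper's learned components.

```python
# Illustrative sketch only: a safety wrapper that screens an RL agent's actions
# with a learned "blocker" model before they reach the environment.
# Names and the toy environment are hypothetical, not taken from the paper.
import random


class SafetyWrapper:
    """Wraps a Gym-style environment and blocks actions the blocker flags."""

    def __init__(self, env, blocker, safe_action):
        self.env = env                  # underlying environment (reset/step API)
        self.blocker = blocker          # callable: (state, action) -> prob. of catastrophe
        self.safe_action = safe_action  # fallback action assumed to be safe
        self.blocked_count = 0          # how often the blocker intervened
        self._state = None

    def reset(self):
        self._state = self.env.reset()
        return self._state

    def step(self, action):
        # If the blocker predicts a catastrophe, substitute the safe fallback,
        # standing in for the human overseer once the blocker is trained.
        if self.blocker(self._state, action) > 0.5:
            self.blocked_count += 1
            action = self.safe_action
        next_state, reward, done, info = self.env.step(action)
        self._state = next_state
        return next_state, reward, done, info


# Toy usage: a 1-D corridor where cell 0 is catastrophic and cell 10 is the goal.
class Corridor:
    def reset(self):
        self.pos = 5
        return self.pos

    def step(self, action):             # action: -1 (left) or +1 (right)
        self.pos += action
        done = self.pos in (0, 10)
        reward = 1.0 if self.pos == 10 else 0.0
        return self.pos, reward, done, {}


def blocker(state, action):
    # Stand-in for a trained classifier: flag moves into the catastrophic cell.
    return 1.0 if state + action == 0 else 0.0


env = SafetyWrapper(Corridor(), blocker, safe_action=+1)
state = env.reset()
done = False
while not done:
    state, reward, done, _ = env.step(random.choice([-1, +1]))
print("episode reward:", reward, "| blocked actions:", env.blocked_count)
```

In the hybrid approach the abstract outlines, such a blocker would be trained on the states and actions a human overseer actually vetoed, while the model-based component would reduce how often risky actions are proposed in the first place.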