Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints
Citation of Original Publication
Prakash Bharat, Waytowich Nicholas, Ganesan Ashwinkumar, Oates Tim, Mohsenin Tinoosh, Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints, http://eehpc.csee.umbc.edu/publications/pdf/2020/AAAI_RL_Workshop.pdf
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Public Domain Mark 1.0
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Abstract
Reinforcement learning (RL) has shown success in solving complex sequential decision-making tasks when a well-defined reward function is available. For agents acting in the real world, these reward functions need to be designed very carefully to ensure the agents act in a safe manner. This is especially true when the agents must interact with humans and perform tasks in such settings. However, hand-crafting such a reward function often requires specialized expertise and quickly becomes difficult to scale with task complexity. This leads to the long-standing problem in reinforcement learning known as reward sparsity, where sparse or poorly specified reward functions slow down the learning process and lead to sub-optimal policies and unsafe behaviors. To make matters worse, reward functions often need to be adjusted or re-specified for each task the RL agent must learn. On the other hand, it is relatively easy for people to specify in language what an agent should or should not do in order to perform a task safely. Inspired by this, we propose a framework to train RL agents conditioned on constraints given in the form of structured language, thus reducing the effort needed to design and integrate specialized rewards into the environment. In our experiments, we show that this method can be used to ground the language to behaviors and enable the agent to solve tasks while following the constraints. We also show how the agent can transfer these skills to other tasks.
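The abstract does not spell out an architecture, but the core idea of conditioning a policy on a structured language constraint can be illustrated with a minimal, hypothetical sketch. The PyTorch module below encodes a tokenized constraint (e.g., "do not step on the lava tiles") into a fixed-size vector and concatenates it with the observation before producing action logits; all class names, dimensions, and design choices here are assumptions for illustration, not the authors' actual model.

```python
import torch
import torch.nn as nn


class ConstraintConditionedPolicy(nn.Module):
    """Toy policy network conditioned on a structured language constraint.

    This is a generic sketch of "language-conditioned RL", not the specific
    architecture from the paper.
    """

    def __init__(self, vocab_size, obs_dim, n_actions, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Recurrent encoder turns the tokenized constraint into one vector.
        self.constraint_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Policy head scores actions given observation + constraint encoding.
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )

    def forward(self, obs, constraint_tokens):
        # constraint_tokens: (batch, seq_len) integer token ids
        _, h = self.constraint_encoder(self.embed(constraint_tokens))
        # h: (1, batch, hidden_dim) for a single-layer GRU
        joint = torch.cat([obs, h.squeeze(0)], dim=-1)
        return self.policy(joint)  # action logits, shape (batch, n_actions)
```

In a setup like this, the same network can be handed different constraint strings at training and test time, which is one plausible way the transfer of constraint-following behavior across tasks mentioned in the abstract could be realized; constraint violations would typically be discouraged through an auxiliary penalty added to the environment reward.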