Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints

dc.contributor.author: Prakash, Bharat
dc.contributor.author: Waytowich, Nicholas
dc.contributor.author: Ganesan, Ashwinkumar
dc.contributor.author: Oates, Tim
dc.contributor.author: Mohsenin, Tinoosh
dc.date.accessioned: 2020-03-04T15:39:51Z
dc.date.available: 2020-03-04T15:39:51Z
dc.description.abstract: Reinforcement learning (RL) has shown success in solving complex sequential decision-making tasks when a well-defined reward function is available. For agents acting in the real world, these reward functions need to be designed very carefully to ensure the agents act in a safe manner. This is especially true when the agents must interact with humans and perform tasks in such settings. However, hand-crafting such a reward function often requires specialized expertise and quickly becomes difficult to scale with task complexity. This leads to the long-standing problem in reinforcement learning known as reward sparsity, where sparse or poorly specified reward functions slow down the learning process and lead to sub-optimal policies and unsafe behaviors. To make matters worse, reward functions often need to be adjusted or re-specified for each task the RL agent must learn. On the other hand, it is relatively easy for people to specify in language what one should or should not do in order to perform a task safely. Inspired by this, we propose a framework to train RL agents conditioned on constraints expressed in structured language, thus reducing the effort needed to design and integrate specialized rewards into the environment. In our experiments, we show that this method can be used to ground the language to behaviors and enable the agent to solve tasks while following the constraints. We also show how the agent can transfer these skills to other tasks.
dc.description.sponsorship: This project was sponsored by the U.S. Army Research Laboratory under Cooperative Agreement Number W911NF10-2-0022. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. We also thank Sunil Gandhi for useful discussions during the course of this project.
dc.description.uri: http://eehpc.csee.umbc.edu/publications/pdf/2020/AAAI_RL_Workshop.pdf
dc.format.extent: 9 pages
dc.genre: conference papers and proceedings preprints
dc.identifier: doi:10.13016/m2yg3c-gxs4
dc.identifier.citation: Prakash, Bharat; Waytowich, Nicholas; Ganesan, Ashwinkumar; Oates, Tim; Mohsenin, Tinoosh. Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints. http://eehpc.csee.umbc.edu/publications/pdf/2020/AAAI_RL_Workshop.pdf
dc.identifier.uri: http://hdl.handle.net/11603/17463
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Public Domain Mark 1.0
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights.uri: http://creativecommons.org/publicdomain/mark/1.0/
dc.subject: Energy Efficient High Performance Computing Lab
dc.title: Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints
dc.type: Text

Files

Original bundle

Name: AAAI_RL_Workshop.pdf
Size: 502.25 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission