Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention

dc.contributor.authorPrakash, Bharat
dc.contributor.authorKhatwani, Mohit
dc.contributor.authorWaytowich, Nicholas
dc.contributor.authorMohsenin, Tinoosh
dc.date.accessioned2019-04-19T19:24:46Z
dc.date.available2019-04-19T19:24:46Z
dc.date.issued2019-03-22
dc.description.abstractRecent progress in AI and reinforcement learning (RL) has shown great success in solving complex problems with high-dimensional state spaces. However, most of these successes have come in simulated environments where failure carries little or no consequence. Real-world applications, in contrast, require solutions that are safe to train and operate, as catastrophic failures are inadmissible, especially when humans are involved. Current safe RL systems rely on human oversight during training and exploration to keep the agent out of catastrophic states. These methods demand a large amount of human labor and are difficult to scale. We present a hybrid method for reducing human intervention time by combining model-based approaches with a supervised learner, improving sample efficiency while ensuring safety. We evaluate these methods on various grid-world environments using both standard and visual representations, and show that our approach outperforms traditional model-free approaches in sample efficiency, number of catastrophic states reached, and overall task performance.en_US
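The abstract describes training a supervised learner to imitate human intervention decisions so the human can eventually be taken out of the loop. A minimal sketch of that idea, assuming a hypothetical one-dimensional grid world with a scripted `human_blocks` oracle standing in for the human overseer (the paper's actual environments and learner are not reproduced here):

```python
# Hypothetical 1-D grid: positions 0..5; position 5 is a catastrophic "cliff".
# Actions: -1 (step left), +1 (step right). A scripted "human overseer" blocks
# any action that would step onto the cliff; its decisions become labels.
CLIFF = 5

def human_blocks(state, action):
    """Scripted stand-in for the human overseer (an assumption for this sketch)."""
    return state + action == CLIFF

# Phase 1: collect intervention labels by querying the "human" during exploration.
labels = {}  # (state, action) -> blocked?
for state in range(5):
    for action in (-1, 1):
        labels[(state, action)] = human_blocks(state, action)

# Phase 2: a tabular "supervised learner" that imitates the human's blocks,
# removing the need for a human in the loop at deployment time.
def learned_blocker(state, action):
    return labels.get((state, action), False)

def safe_step(state, proposed_action):
    """Override the agent's action when the learned blocker predicts a catastrophe."""
    if learned_blocker(state, proposed_action):
        proposed_action = -proposed_action  # fall back to the opposite, safe move
    return max(0, state + proposed_action)

# The agent proposes stepping from 4 onto the cliff; the blocker redirects it.
print(safe_step(4, +1))  # → 3
```

In the paper itself the blocker is a learned model over high-dimensional (including visual) states rather than a lookup table, and it is combined with model-based rollouts to cut sample cost; the sketch only illustrates the intervention-imitation loop.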
dc.description.sponsorshipThis work is supported by U.S. Army Research Laboratory under Cooperative Agreement Number W911NF-10-2-0022en_US
dc.description.urihttps://arxiv.org/abs/1903.09328en_US
dc.format.extent7 pagesen_US
dc.genrejournal articles preprintsen_US
dc.identifierdoi:10.13016/m2auyy-udku
dc.identifier.citationBharat Prakash, Mohit Khatwani, Nicholas Waytowich, Tinoosh Mohsenin, Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention, 2019, https://arxiv.org/abs/1903.09328en_US
dc.identifier.urihttp://hdl.handle.net/11603/13476
dc.language.isoen_USen_US
dc.relation.isAvailableAtThe University of Maryland, Baltimore County (UMBC)
dc.relation.ispartofUMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartofUMBC Faculty Collection
dc.relation.ispartofUMBC Student Collection
dc.rightsThis item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subjectartificial intelligenceen_US
dc.subjectReinforcement Learning (RL)en_US
dc.subjecthuman interventionen_US
dc.titleImproving Safety in Reinforcement Learning Using Model-Based Architectures and Human Interventionen_US
dc.typeTexten_US

Files

Original bundle
- 1903.09328.pdf (626.92 KB, Adobe Portable Document Format)

License bundle
- license.txt (2.56 KB, Item-specific license agreed upon to submission)