Guiding Inference with Policy Search Reinforcement Learning
Links to Files: https://aaai.org/Library/FLAIRS/2007/flairs07-027.php
Type of Work: conference papers and proceedings preprints (6 pages)
Citation of Original Publication: Matthew E. Taylor, Cynthia Matuszek, Pace Reagan Smith, Michael Witbrock, "Guiding Inference with Policy Search Reinforcement Learning," The 20th International FLAIRS Conference (FLAIRS-07), Key West, Florida, May 2007.
Rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.
Logical reasoning systems
Interactive Robotics and Language Lab
Symbolic reasoning is a well understood and effective approach to handling reasoning over formally represented knowledge; however, simple symbolic inference systems necessarily slow down as the complexity of the knowledge and the number of ground facts grow. As automated approaches to ontology-building become more prevalent and sophisticated, knowledge base systems become larger and more complex, necessitating techniques for faster inference. This work uses reinforcement learning, a statistical machine learning technique, to learn control laws that guide inference. We implement our learning method in ResearchCyc, a very large knowledge base with millions of assertions. A large set of test queries, some of which require tens of thousands of inference steps to answer, can be answered faster after training over an independent set of training queries. Furthermore, this learned inference module outperforms ResearchCyc's integrated inference module, a module that has been hand-tuned with considerable effort.
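The abstract's core idea, learning a policy that chooses among inference strategies to reduce answer time, can be illustrated with a minimal policy-search sketch. The code below is a hedged toy, not the paper's method or Cyc's API: the tactic names, the cost model, and the hill-climbing search over policy weights are all hypothetical stand-ins for the paper's actual tactics and reward signal.

```python
import random

# Hypothetical toy setup: three inference "tactics"; each query has a
# hidden best tactic, and picking it answers the query in fewer steps.
TACTICS = ["forward-chain", "backward-chain", "unify-first"]

def steps_to_answer(query_best, chosen):
    # Toy cost model (assumption): the right tactic is cheap, others costly.
    return 10 if chosen == query_best else 100

def rollout(weights, queries):
    """Average inference steps when the policy greedily picks the tactic
    with the highest weight for each query's (toy) feature."""
    total = 0
    for best in queries:
        chosen = max(TACTICS, key=lambda t: weights[(best, t)])
        total += steps_to_answer(best, chosen)
    return total / len(queries)

def policy_search(queries, iters=200, seed=0):
    """Simple hill-climbing policy search: perturb one weight at a time,
    keep the change if average inference cost on training queries drops."""
    rng = random.Random(seed)
    weights = {(b, t): rng.random() for b in TACTICS for t in TACTICS}
    best_cost = rollout(weights, queries)
    for _ in range(iters):
        cand = dict(weights)
        key = rng.choice(list(cand))
        cand[key] += rng.gauss(0, 1)
        cost = rollout(cand, queries)
        if cost < best_cost:
            weights, best_cost = cand, cost
    return weights, best_cost

if __name__ == "__main__":
    rng = random.Random(1)
    train = [rng.choice(TACTICS) for _ in range(30)]
    weights, cost = policy_search(train)
    print(f"average steps per query after training: {cost:.1f}")
```

As in the paper's setup, the policy is trained on one set of queries and the learned weights are then reused on unseen queries; here the "reward" is simply the negated step count, whereas the real system measures actual inference effort inside ResearchCyc.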