Guiding Inference with Policy Search Reinforcement Learning

Date

2007-05

Citation of Original Publication

Matthew E. Taylor, Cynthia Matuszek, Pace Reagan Smith, Michael Witbrock, Guiding Inference with Policy Search Reinforcement Learning, The 20th International FLAIRS Conference (FLAIRS-07), Key West, Florida, May 2007.

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.

Abstract

Symbolic reasoning is a well-understood and effective approach to handling reasoning over formally represented knowledge; however, simple symbolic inference systems necessarily slow as complexity and the number of ground facts grow. As automated approaches to ontology-building become more prevalent and sophisticated, knowledge base systems become larger and more complex, necessitating techniques for faster inference. This work uses reinforcement learning, a statistical machine learning technique, to learn control laws that guide inference. We implement our learning method in ResearchCyc, a very large knowledge base with millions of assertions. A large set of test queries, some of which require tens of thousands of inference steps to answer, can be answered faster after training over an independent set of training queries. Furthermore, this learned inference module outperforms ResearchCyc's integrated inference module, a module that has been hand-tuned with considerable effort.
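The abstract's core idea, learning a policy that chooses among inference strategies to reduce proof cost, can be illustrated with a minimal sketch. This is not the paper's implementation: the tactic set, their simulated step counts, and the REINFORCE-with-baseline learner below are all hypothetical stand-ins for the inference-control problem described.

```python
import math
import random

# Hypothetical setup: three inference "tactics" that differ in the average
# number of inference steps they need to answer a query. A softmax policy
# over tactic preferences is trained with REINFORCE (policy-gradient
# search) to minimise expected steps; reward = -steps.
random.seed(0)

MEAN_STEPS = [40.0, 10.0, 25.0]   # tactic 1 is cheapest on this query mix
theta = [0.0, 0.0, 0.0]           # softmax preference parameters
ALPHA = 0.01                      # learning rate

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

baseline = 0.0
for episode in range(3000):
    probs = softmax(theta)
    a = sample(probs)
    steps = random.gauss(MEAN_STEPS[a], 2.0)  # simulated inference cost
    reward = -steps
    baseline += 0.05 * (reward - baseline)    # running-average baseline
    advantage = reward - baseline
    # Policy-gradient update for a softmax policy:
    # d log pi(a) / d theta_i = 1[i == a] - pi(i)
    for i in range(len(theta)):
        grad = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += ALPHA * advantage * grad

probs = softmax(theta)
best = probs.index(max(probs))
```

After training, the policy concentrates its probability mass on the cheapest tactic, mirroring the paper's claim that learned control laws can answer queries in fewer inference steps than an untuned strategy.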