Neurosymbolic Reinforcement Learning and Planning: A Survey

dc.contributor.author: Acharya, Kamal
dc.contributor.author: Raza, Waleed
dc.contributor.author: Dourado, Carlos
dc.contributor.author: Velasquez, Alvaro
dc.contributor.author: Song, Houbing
dc.date.accessioned: 2023-09-22T16:22:03Z
dc.date.available: 2023-09-22T16:22:03Z
dc.date.issued: 2023-09-04
dc.description.abstract: The area of Neurosymbolic Artificial Intelligence (Neurosymbolic AI) is rapidly developing and has become a popular research topic, encompassing sub-fields such as Neurosymbolic Deep Learning (Neurosymbolic DL) and Neurosymbolic Reinforcement Learning (Neurosymbolic RL). Compared to traditional learning methods, Neurosymbolic AI offers significant advantages by simplifying complexity and providing transparency and explainability. Reinforcement Learning (RL), a long-standing Artificial Intelligence (AI) concept that mimics human behavior using rewards and punishments, is a fundamental component of Neurosymbolic RL, a recent integration of the two fields that has yielded promising results. The aim of this paper is to contribute to the emerging field of Neurosymbolic RL by conducting a literature survey. Our evaluation focuses on the three components that constitute Neurosymbolic RL: neural, symbolic, and RL. Based on the role played by the neural and symbolic parts in RL, we categorize works into three taxonomies: Learning for Reasoning, Reasoning for Learning, and Learning-Reasoning. These categories are further divided into sub-categories based on their applications. Furthermore, we analyze the RL components of each research work, including the state space, action space, policy module, and RL algorithm. Additionally, we identify research opportunities and challenges in various applications within this dynamic field.
dc.description.sponsorship: This work was supported in part by the U.S. National Science Foundation under Grant No. 2309760 and Grant No. 2317117.
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/10238788
dc.format.extent: 14 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2kdsk-oth0
dc.identifier.citation: K. Acharya, W. Raza, C. Dourado, A. Velasquez and H. H. Song, "Neurosymbolic Reinforcement Learning and Planning: A Survey," in IEEE Transactions on Artificial Intelligence, doi: 10.1109/TAI.2023.3311428.
dc.identifier.uri: https://doi.org/10.1109/TAI.2023.3311428
dc.identifier.uri: http://hdl.handle.net/11603/29839
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Neurosymbolic Reinforcement Learning and Planning: A Survey
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-9712-0265
dcterms.creator: https://orcid.org/0000-0003-2631-9223

Files

Original bundle

Name: Neurosymbolic_Reinforcement_Learning_and_Planning_A_Survey.pdf
Size: 1.62 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission