On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms
dc.contributor.author | Richards, Luke E. | |
dc.contributor.author | Yaros, Jessie | |
dc.contributor.author | Babcock, Jasen | |
dc.contributor.author | Ly, Coung | |
dc.contributor.author | Cosbey, Robin | |
dc.contributor.author | Doster, Timothy | |
dc.contributor.author | Matuszek, Cynthia | |
dc.date.accessioned | 2025-04-01T14:55:30Z | |
dc.date.available | 2025-04-01T14:55:30Z | |
dc.date.issued | 2025-02-13 | |
dc.description.abstract | Creating usable and deployable Artificial Intelligence (AI) systems requires a level of assurance in performance under many different conditions. Often, deployed machine learning systems will require more classical logic and reasoning, performed through neurosymbolic programs jointly with artificial neural network sensing. While many prior works have examined the assurance of a single component of the system, either the neural network alone or the entire enterprise system, very few have examined the assurance of integrated neurosymbolic systems. In this work, we assess the assurance of end-to-end fully differentiable neurosymbolic systems, an emerging method for creating data-efficient and more interpretable models. We perform this investigation using Scallop, an end-to-end neurosymbolic library, across classification and reasoning tasks in both the image and audio domains. We assess assurance across adversarial robustness, calibration, user performance parity, and the interpretability of solutions for catching misaligned solutions. Our empirical results show that end-to-end neurosymbolic methods present unique opportunities for assurance beyond their data efficiency, though not across the board. We find that this class of neurosymbolic models has higher assurance in cases where arithmetic operations are defined and where the input space is high-dimensional, settings in which fully neural counterparts struggle to learn robust reasoning operations. We identify how the interpretability of neurosymbolic models can catch shortcuts that later result in increased adversarial vulnerability despite performance parity. Finally, we find that the promised data efficiency typically holds only for class-imbalanced reasoning problems. | |
dc.description.sponsorship | This work was conducted under the Laboratory Directed Research and Development (LDRD) Program at Pacific Northwest National Laboratory (PNNL), a multiprogram National Laboratory operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RL01830. This article has been cleared by PNNL for public release as PNNL-SA-208413. | |
dc.description.uri | http://arxiv.org/abs/2502.08932 | |
dc.format.extent | 17 pages | |
dc.genre | journal articles | |
dc.genre | preprints | |
dc.identifier | doi:10.13016/m2cmlv-oxpa | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2502.08932 | |
dc.identifier.uri | http://hdl.handle.net/11603/37903 | |
dc.language.iso | en_US | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
dc.rights | This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law. | |
dc.rights | Public Domain | |
dc.rights.uri | https://creativecommons.org/publicdomain/mark/1.0/ | |
dc.subject | UMBC Interactive Robotics and Language Lab | |
dc.subject | Computer Science - Computer Vision and Pattern Recognition | |
dc.subject | Computer Science - Artificial Intelligence | |
dc.title | On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0003-1383-8120 |