Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety

dc.contributor.author: Gaur, Manas
dc.contributor.author: Sheth, Amit
dc.date.accessioned: 2024-02-27T19:08:12Z
dc.date.available: 2024-02-27T19:08:12Z
dc.date.issued: 2024-02-14
dc.description.abstract: Explainability and Safety engender Trust, and both require a model to exhibit consistency and reliability. Achieving these requires using and analyzing data and knowledge with statistical and symbolic AI methods relevant to the AI application; neither alone will do. Consequently, we argue, and seek to demonstrate, that the NeuroSymbolic AI approach is better suited to building trusted AI systems. We present the CREST framework, which shows how Consistency, Reliability, user-level Explainability, and Safety are built on NeuroSymbolic methods that use data and knowledge to support the requirements of critical applications such as health and well-being. This article focuses on Large Language Models (LLMs) as the AI system of choice within the CREST framework. LLMs have garnered substantial attention from researchers due to their versatility in handling a broad array of natural language processing (NLP) scenarios. For example, ChatGPT and Google's MedPaLM have emerged as highly promising platforms for answering general and health-related queries, respectively. Nevertheless, these models remain black boxes despite incorporating human feedback and instruction-guided tuning; for instance, ChatGPT can generate unsafe responses despite its safety guardrails. CREST presents a plausible approach that harnesses procedural and graph-based knowledge within a NeuroSymbolic framework to shed light on the challenges associated with LLMs.
dc.description.sponsorship: We acknowledge partial support from the NSF EAGER award #2335967 and the UMBC Summer Faculty Fellowship. Any opinions, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or UMBC.
dc.description.uri: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12149
dc.format.extent: 17 pages
dc.genre: journal articles
dc.identifier: doi:10.13016/m2t964-fbhm
dc.identifier.citation: Gaur, Manas, and Amit Sheth. “Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety.” AI Magazine 45, no. 1 (2024): 139–55. https://doi.org/10.1002/aaai.12149.
dc.identifier.uri: https://doi.org/10.1002/aaai.12149
dc.identifier.uri: http://hdl.handle.net/11603/31716
dc.language.iso: en_US
dc.publisher: Wiley
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: CC BY 4.0 DEED Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name: AI Magazine - 2024 - Gaur - Building trustworthy NeuroSymbolic AI Systems Consistency reliability explainability and.pdf
Size: 2.69 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed to upon submission