Targeted Knowledge Infusion To Make Conversational AI Explainable and Safe
Date
2024-07-15
Department
UMBC Computer Science and Electrical Engineering
Program
Part of Knowledge-infused AI and Inference Lab at UMBC
Citation of Original Publication
Gaur, Manas. “Targeted Knowledge Infusion To Make Conversational AI Explainable and Safe.” Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 15438–15438. https://doi.org/10.1609/aaai.v37i13.26805.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Conversational Systems (CSys) represent practical and tangible outcomes of advances in NLP and AI. CSys see continuous improvement through unsupervised training of large language models (LLMs) on vast amounts of generic training data. However, when these CSys are proposed for use in domains like mental health, they fail to meet acceptable standards of clinical care, such as the clinical process embodied in the Patient Health Questionnaire (PHQ-9). The talk will present Knowledge-infused Learning (KiL), a paradigm within NeuroSymbolic AI that focuses on making machine/deep learning models (i) learn over knowledge-enriched data, (ii) learn to follow guidelines in process-oriented tasks for safe and reasonable generation, and (iii) learn to leverage multiple contexts and stratified knowledge to yield user-level explanations. KiL established Knowledge-Intensive Language Understanding, a set of tasks for assessing safety, explainability, and conceptual flow in CSys.