A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement

dc.contributor.author: Sarkar, Surjodeep
dc.contributor.author: Gaur, Manas
dc.contributor.author: Chen, Lujie Karen
dc.contributor.author: Garg, Muskan
dc.contributor.author: Srivastava, Biplav
dc.date.accessioned: 2023-10-26T19:40:51Z
dc.date.available: 2023-10-26T19:40:51Z
dc.date.issued: 2023-10-12
dc.description.abstract: Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. Developed by clinical psychologists, psychiatrists, and AI researchers, these systems are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One such gap is their inability to explain their decisions to patients and MHPs, which makes conversations less trustworthy. VMHAs are also vulnerable to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs with respect to user-level explainability and safety, a set of properties desirable for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the GPT-3.5 and GPT-4 models, which has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of the AI, natural language processing, and MHP communities, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
dc.description.uri: https://www.frontiersin.org/articles/10.3389/frai.2023.1229805/full
dc.format.extent: 14 pages
dc.genre: journal articles
dc.identifier: doi:10.13016/m2ag58-xjgb
dc.identifier.citation: Sarkar S, Gaur M, Chen LK, Garg M and Srivastava B (2023). A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement. Front. Artif. Intell. 6:1229805. doi: 10.3389/frai.2023.1229805
dc.identifier.uri: https://doi.org/10.3389/frai.2023.1229805
dc.identifier.uri: http://hdl.handle.net/11603/30420
dc.language.iso: en_US
dc.publisher: Frontiers
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Information Systems Department
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0 DEED)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-0147-2777
dcterms.creator: https://orcid.org/0000-0002-5411-2230
dcterms.creator: https://orcid.org/0000-0002-7185-8405

Files

Original bundle
Name: frai-06-1229805.pdf
Size: 1.99 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission