Knowledge-Infused Self Attention Transformers

dc.contributor.author: Roy, Kaushik
dc.contributor.author: Zi, Yuxin
dc.contributor.author: Narayanan, Vignesh
dc.contributor.author: Gaur, Manas
dc.contributor.author: Sheth, Amit
dc.date.accessioned: 2023-07-18T19:42:02Z
dc.date.available: 2023-07-18T19:42:02Z
dc.date.issued: 2023-06-23
dc.description: Second Workshop on Knowledge Augmented Methods for NLP, co-located with KDD 2023
dc.description.abstract: Transformer-based language models have achieved impressive success in various natural language processing tasks due to their ability to capture complex dependencies and contextual information through self-attention mechanisms. However, they are not without limitations. These limitations include hallucinations, where they produce incorrect outputs with high confidence, and alignment issues, where they generate unhelpful or unsafe outputs for human users. These limitations stem from context that is implicit in, or missing from, the data alone. To address this, researchers have explored augmenting these models with external knowledge from knowledge graphs to supply the necessary additional context. However, the ad-hoc nature of existing methods makes it difficult to properly analyze how knowledge infusion affects the many moving parts, or components, of a transformer. This paper introduces a systematic method for infusing knowledge into different components of a transformer-based model. A modular framework is proposed to identify specific components within the transformer architecture, such as the self-attention mechanism, the encoder layers, or the input embedding layer, where knowledge infusion can be applied. Additionally, extensive experiments are conducted on the General Language Understanding Evaluation (GLUE) benchmark tasks, and the findings are reported. This systematic approach aims to facilitate more principled methods for incorporating knowledge into language model architectures.
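
As a rough illustration of the component-level knowledge infusion the abstract describes, the following PyTorch sketch augments self-attention so that tokens attend over projected knowledge-graph embeddings alongside the token sequence. This is a minimal sketch under assumed interfaces, not the paper's actual method: the class name, the kg_embeddings input, and the concatenate-facts-as-extra-keys/values strategy are illustrative assumptions.

# Minimal sketch (not the paper's exact method): infusing external
# knowledge-graph embeddings into a transformer's self-attention by
# appending projected knowledge vectors as extra key/value positions.
import torch
import torch.nn as nn

class KnowledgeInfusedSelfAttention(nn.Module):  # hypothetical name
    def __init__(self, d_model: int, n_heads: int, d_kg: int):
        super().__init__()
        # Project knowledge-graph embeddings into the model's hidden space
        self.kg_proj = nn.Linear(d_kg, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, kg_embeddings: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token representations
        # kg_embeddings: (batch, n_facts, d_kg) retrieved KG-fact vectors
        kg = self.kg_proj(kg_embeddings)            # (batch, n_facts, d_model)
        memory = torch.cat([x, kg], dim=1)          # tokens + facts as keys/values
        out, _ = self.attn(query=x, key=memory, value=memory)
        return out                                   # (batch, seq_len, d_model)

# Usage: each token attends jointly over the sequence and the retrieved facts.
layer = KnowledgeInfusedSelfAttention(d_model=768, n_heads=12, d_kg=200)
x = torch.randn(2, 16, 768)      # dummy token embeddings
facts = torch.randn(2, 8, 200)   # dummy KG-fact embeddings
y = layer(x, facts)              # shape: (2, 16, 768)

The same projection-and-attend pattern could instead be applied at the input embedding layer or within individual encoder layers, which is the kind of placement choice the proposed modular framework is meant to make explicit.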
dc.description.sponsorship: This work is built on prior work [15–30], and supported by the National Science Foundation under Grant 2133842, “EAGER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning” [31–33].
dc.description.uri: https://arxiv.org/abs/2306.13501
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2jfbu-pk3n
dc.identifier.uri: https://doi.org/10.48550/arXiv.2306.13501
dc.identifier.uri: http://hdl.handle.net/11603/28743
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Knowledge-Infused Self Attention Transformers
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name: 2306.13501.pdf
Size: 923.95 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission