TDLR: Top (Semantic)-Down (Syntactic) Language Representation
dc.contributor.author | Rawte, Vipula | |
dc.contributor.author | Chakraborty, Megha | |
dc.contributor.author | Roy, Kaushik | |
dc.contributor.author | Gaur, Manas | |
dc.contributor.author | Faldu, Keyur | |
dc.contributor.author | Kikani, Prashant | |
dc.contributor.author | Akbari, Hemang | |
dc.contributor.author | Sheth, Amit | |
dc.date.accessioned | 2022-11-03T15:51:13Z | |
dc.date.available | 2022-11-03T15:51:13Z | |
dc.date.issued | 2022-10-20 | |
dc.description | NeurIPS '22 Workshop on All Things Attention: Bridging Different Perspectives on Attention, December 2, 2022, New Orleans, Louisiana, United States. | |
dc.description.abstract | Language understanding involves processing text with both the grammatical and common-sense contexts of the text fragments. The text "I went to the grocery store and brought home a car" requires both the grammatical context (syntactic) and common-sense context (semantic) to capture the oddity in the sentence. Contextualized text representations learned by Language Models (LMs) are expected to capture a variety of syntactic and semantic contexts from large training corpora. Recent work such as ERNIE has shown that infusing knowledge contexts, where available, into LMs results in significant performance gains on General Language Understanding Evaluation (GLUE) benchmark tasks. However, to our knowledge, no knowledge-aware model has attempted to infuse knowledge through top-down semantics-driven syntactic processing (e.g., common-sense to grammatical) and directly operated on the attention mechanism that LMs leverage to learn the data context. We propose a learning framework, Top-Down Language Representation (TDLR), to infuse common-sense semantics into LMs. In our implementation, we build on BERT for its rich syntactic knowledge and use the knowledge graphs ConceptNet and WordNet to infuse semantic knowledge. | en_US |
dc.description.uri | https://openreview.net/forum?id=XcTBJ0Ak59 | en_US |
dc.format.extent | 5 pages | en_US |
dc.genre | conference papers and proceedings | en_US |
dc.identifier | doi:10.13016/m2zruc-k4x1 | |
dc.identifier.citation | Rawte, Vipula, Megha Chakraborty, Kaushik Roy, Manas Gaur, Keyur Faldu, Prashant Kikani, Hemang Akbari, and Amit P. Sheth. “TDLR: Top Semantic-Down Syntactic Language Representation,” In NeurIPS '22 Workshop on All Things Attention: Bridging Different Perspectives on Attention. https://openreview.net/forum?id=XcTBJ0Ak59. | |
dc.identifier.uri | http://hdl.handle.net/11603/26255 | |
dc.language.iso | en_US | en_US |
dc.publisher | OpenReview | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | en_US |
dc.subject | UMBC Ebiquity Research Group | |
dc.title | TDLR: Top (Semantic)-Down (Syntactic) Language Representation | en_US |
dc.type | Text | en_US |
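The abstract above describes operating directly on the attention mechanism to infuse knowledge-graph semantics into an LM. Below is a minimal, hypothetical sketch of how a knowledge-derived bias could be added to scaled dot-product attention; it is not the authors' TDLR implementation, and the knowledge_bias term (standing in for ConceptNet/WordNet relatedness between token pairs) is an assumption for illustration only.

```python
# Hypothetical sketch: adding a knowledge-graph-derived additive bias to
# scaled dot-product attention. NOT the authors' TDLR implementation; the
# knowledge_bias matrix (e.g., ConceptNet/WordNet relatedness scores between
# token pairs) is an assumed input used purely for illustration.
import math
import torch

def knowledge_biased_attention(q, k, v, knowledge_bias):
    """q, k, v: (batch, seq_len, d); knowledge_bias: (batch, seq_len, seq_len)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # standard attention logits
    scores = scores + knowledge_bias                 # inject semantic relatedness
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Toy usage with random tensors and a symmetric stand-in relatedness matrix.
batch, seq_len, d = 2, 8, 64
q = torch.randn(batch, seq_len, d)
k = torch.randn(batch, seq_len, d)
v = torch.randn(batch, seq_len, d)
rel = torch.rand(batch, seq_len, seq_len)
knowledge_bias = (rel + rel.transpose(-2, -1)) / 2   # symmetric KG relatedness stand-in
out = knowledge_biased_attention(q, k, v, knowledge_bias)
print(out.shape)  # torch.Size([2, 8, 64])
```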