The Role of Interactive Visualization in Explaining (Large) NLP Models: from Data to Inference
Date
2023-01-11
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Attribution-NonCommercial-ShareAlike 4.0 International
Abstract
With a constant increase in the number of learned parameters, modern neural language models have become increasingly powerful. Yet explaining these complex models' behavior remains a largely unsolved problem. In this paper, we discuss the role interactive visualization can play in explaining NLP models (XNLP). We motivate the use of visualization in relation to target users and common NLP pipelines. We also present several use cases that provide concrete examples of XNLP with visualization. Finally, we outline an extensive list of research opportunities in this field.