Context-based multimodal input understanding in conversational systems
Citation of Original Publication
Chai, J., S. Pan, M. X. Zhou, and K. Houck. "Context-Based Multimodal Input Understanding in Conversational Systems." In Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces, 87–92, 2002. https://doi.org/10.1109/ICMI.2002.1166974.
Rights
© 2002 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
In a multimodal human-machine conversation, user inputs are often abbreviated or imprecise, and merely fusing multimodal inputs together often cannot yield a complete understanding. To address these inadequacies, we are building a semantics-based multimodal interpretation framework called MIND (Multimodal Interpretation for Natural Dialog). The unique feature of MIND is the use of a variety of contexts (e.g., domain context and conversation context) to enhance multimodal fusion. In this paper we present a semantically rich modeling scheme and a context-based approach that enable MIND to gain a full understanding of user inputs, including ambiguous and incomplete ones.
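The record contains no code; as a rough illustration of the idea the abstract describes, the following is a minimal, hypothetical Python sketch in which an ambiguous referring expression is resolved by combining gesture-recognition confidence with domain context (type agreement) and conversation context (recent focus). Every name and weight here (resolve_reference, Interpretation, the 0.5/0.3 bonuses) is an assumption made for illustration, not MIND's actual modeling scheme.

```python
from dataclasses import dataclass


@dataclass
class Interpretation:
    referent: str
    score: float


def resolve_reference(ref_type, gesture_candidates, conversation_focus, domain_types):
    """Resolve a referring expression (e.g., "this house") to a domain object.

    Fusion alone would rank objects by gesture confidence only; domain and
    conversation context adjust the ranking and cover inputs with no gesture.
    The additive weights are arbitrary placeholders for this sketch.
    """
    scored = []
    for obj, gesture_conf in gesture_candidates.items():
        score = gesture_conf
        # Domain context: prefer objects whose type matches the expression.
        if domain_types.get(obj) == ref_type:
            score += 0.5
        # Conversation context: prefer objects in recent conversational focus.
        if obj in conversation_focus:
            score += 0.3
        scored.append(Interpretation(obj, score))
    # Incomplete input (no gesture at all): fall back on conversation context.
    if not scored:
        scored = [Interpretation(obj, 0.3) for obj in conversation_focus
                  if domain_types.get(obj) == ref_type]
    return max(scored, key=lambda i: i.score, default=None)


if __name__ == "__main__":
    # "How much is this house?" with an imprecise point near two objects.
    gesture = {"house_12": 0.55, "tree_3": 0.60}  # fusion alone picks the tree
    focus = ["house_12"]                          # house_12 was just discussed
    types = {"house_12": "house", "tree_3": "tree"}
    print(resolve_reference("house", gesture, focus, types))
    # Interpretation(referent='house_12', score=1.35)
```

In this toy run, raw fusion would favor the tree (higher gesture confidence), but type agreement and conversational focus tip the decision to the house, mirroring the abstract's claim that fusing inputs alone can be insufficient.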
