MIND: A Semantics-based Multimodal Interpretation Framework for Conversational Systems

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

To facilitate a full-fledged multimodal human-machine conversation, we are developing an intelligent infrastructure called Responsive Information Architect (RIA). As part of this effort, we are building a semantics-based multimodal interpretation framework called MIND (Multimodal Interpretation for Natural Dialog). MIND addresses both multimodal input understanding and discourse interpretation in a conversation setting. In particular, MIND has two unique features. First, MIND characterizes the intention and attention of user inputs, and of the entire conversation, along multiple dimensions. This fine-grained semantic model provides a computational basis for multimodal interpretation. Second, MIND uses rich contexts, such as conversation discourse, domain knowledge, visual context, and user and environment models, to facilitate multimodal understanding. This approach allows MIND to improve the understanding of user inputs, including abbreviated or imprecise ones.
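
The abstract does not spell out the concrete representation of this semantic model. As a rough illustration only, the following minimal Python sketch shows one plausible way intention and attention structures for a single user turn could be encoded; all class names, fields, and example values (Intention, Attention, Interpretation, motivator, focus_ids, and so on) are hypothetical and not taken from the paper.

```python
# A minimal sketch (not from the paper) of MIND-style semantic structures.
# Every name and value below is a hypothetical illustration of the
# "multiple dimensions" of intention and attention the abstract mentions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Intention:
    """What the user wants to achieve, characterized along several dimensions."""
    motivator: str                 # e.g., "DataPresentation"
    act: str                       # e.g., "Request"
    method: Optional[str] = None   # e.g., "Search"

@dataclass
class Attention:
    """What the user is focused on, e.g., objects in the visual context."""
    base: str                                        # semantic category, e.g., "House"
    constraints: dict = field(default_factory=dict)  # e.g., {"price": "<500k"}
    focus_ids: list = field(default_factory=list)    # resolved object references

@dataclass
class Interpretation:
    """A unified semantic reading of one multimodal user input."""
    intention: Intention
    attention: Attention

# Example: a spoken request combined with a pen gesture on a map object
# might, after context-based resolution, unify into one interpretation.
turn = Interpretation(
    intention=Intention(motivator="DataPresentation", act="Request"),
    attention=Attention(base="House", focus_ids=["house_17"]),
)
```

In a sketch like this, the rich contexts the abstract lists (discourse, domain knowledge, visual context, user and environment models) would be consulted to fill in fields that a terse or imprecise input leaves underspecified, such as resolving a gesture to a concrete entry in focus_ids.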