Task Integration in Multimodal Speech Recognition Environments

Date

1997-04-01

Citation of Original Publication

Michael A. Grasso and Tim Finin, Task Integration in Multimodal Speech Recognition Environments, XRDS: Crossroads, The ACM Magazine for Students, Special Issue on Human-Computer Interaction, Volume 3, Issue 3, March 1997. DOI: 10.1145/270974.270982

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

A model of complementary behavior has been proposed, based on arguments that direct manipulation and speech recognition interfaces have reciprocal strengths and weaknesses. This suggests that user interface performance and acceptance may increase by adopting a multimodal approach that combines speech and direct manipulation. More theoretical work is needed to understand how to leverage this advantage. In this paper, a framework is presented to empirically evaluate the types of tasks that might benefit from such a multimodal interface.