Task Integration in Multimodal Speech Recognition Environments
Links to Files: https://dl.acm.org/citation.cfm?id=270982
Type of Work: 7 pages
Citation of Original Publication: Michael A. Grasso and Tim Finin, "Task Integration in Multimodal Speech Recognition Environments," XRDS: Crossroads, The ACM Magazine for Students, Special Issue on Human Computer Interaction, Volume 3, Issue 3, March 1997. DOI: 10.1145/270974.270982
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is under a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract: A model of complementary behavior has been proposed based on arguments that direct manipulation and speech recognition interfaces have reciprocal strengths and weaknesses. This suggests that user interface performance and acceptance may increase by adopting a multimodal approach that combines speech and direct manipulation. More theoretical work is needed to understand how to leverage this advantage. In this paper, a framework is presented to empirically evaluate the types of tasks that might benefit from such a multimodal interface.