The Integrality of Speech in Multimodal Interfaces

dc.contributor.author: Grasso, Michael A.
dc.contributor.author: Ebert, David S.
dc.contributor.author: Finin, Timothy W.
dc.date.accessioned: 2019-02-07T19:22:39Z
dc.date.available: 2019-02-07T19:22:39Z
dc.date.issued: 1998-11-30
dc.description.abstract: A framework of complementary behavior has been proposed which maintains that direct manipulation and speech interfaces have reciprocal strengths and weaknesses. This suggests that user interface performance and acceptance may increase by adopting a multimodal approach that combines speech and direct manipulation. This effort examined the hypothesis that the speed, accuracy, and acceptance of multimodal speech and direct manipulation interfaces will increase when the modalities match the perceptual structure of the input attributes. A software prototype that supported a typical biomedical data collection task was developed to test this hypothesis. A group of 20 clinical and veterinary pathologists evaluated the prototype in an experimental setting using repeated measures. The results of this experiment supported the hypothesis that the perceptual structure of an input task is an important consideration when designing a multimodal computer interface. Task completion time, the number of speech errors, and user acceptance improved when the interface best matched the perceptual structure of the input attributes.
dc.description.sponsorship: This research was supported in part by grant 2R44RR07989-02A2 from the National Center for Research Resources.
dc.description.uri: https://dl.acm.org/citation.cfm?id=300521
dc.format.extent: 23 pages
dc.genre: journal articles; preprints
dc.identifier: doi:10.13016/m2pxfp-dsdt
dc.identifier.citation: Michael A. Grasso, David Ebert, and Tim Finin, "The Integrality of Speech in Multimodal Interfaces," ACM Transactions on Computer-Human Interaction (TOCHI), Volume 5, Issue 4, Dec. 1998, Pages 303-325, DOI: 10.1145/300520.300521
dc.identifier.uri: 10.1145/300520.300521
dc.identifier.uri: http://hdl.handle.net/11603/12733
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: © ACM, 1998. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 5, Iss. 4, Dec. 1998, http://doi.acm.org/10.1145/300520.300521
dc.subject: design
dc.subject: experimentation
dc.subject: human factors
dc.subject: measurement
dc.subject: performance
dc.subject: theory
dc.subject: direct manipulation
dc.subject: input devices
dc.subject: integrality
dc.subject: medical informatics
dc.subject: multimodal
dc.subject: natural-language processing
dc.subject: pathology
dc.subject: perceptual structure
dc.subject: separability
dc.subject: speech recognition
dc.subject: UMBC Ebiquity Research Group
dc.title: The Integrality of Speech in Multimodal Interfaces
dc.type: Text

Files

Original bundle
Name: 191.pd.pdf
Size: 175.3 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission