Designing Speech, Acoustic and Multimodal Interactions

dc.contributor.author: Munteanu, Cosmin
dc.contributor.author: Irani, Pourang
dc.contributor.author: Oviatt, Sharon
dc.contributor.author: Aylett, Matthew
dc.contributor.author: Penn, Gerald
dc.contributor.author: Pan, Shimei
dc.contributor.author: Sharma, Nikhil
dc.contributor.author: Rudzicz, Frank
dc.contributor.author: Gomez, Randy
dc.contributor.author: Cowan, Ben
dc.contributor.author: Nakamura, Keisuke
dc.date.accessioned: 2025-01-08T15:08:54Z
dc.date.available: 2025-01-08T15:08:54Z
dc.date.issued: 2017-05-06
dc.description: CHI '17: CHI Conference on Human Factors in Computing Systems, Denver, Colorado, USA, May 6-11, 2017
dc.description.abstract: Traditional interfaces are continuously being replaced by mobile, wearable, or pervasive interfaces. Yet when it comes to the input and output modalities enabling our interactions, we have yet to fully embrace some of the most natural forms of communication and information processing that humans possess: speech, language, gestures, thoughts. Very little HCI attention has been dedicated to designing and developing spoken language, acoustic-based, or multimodal interaction techniques, especially for mobile and wearable devices. In addition to the enormous recent engineering progress in processing such modalities, there is now sufficient evidence that many real-life applications do not require 100% accuracy in processing multimodal input to be useful, particularly if such modalities complement each other. This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions for designing more natural interactions, especially with mobile and wearable devices, and to examine how we can leverage recent advances in speech, acoustic, and multimodal processing.
dc.description.uri: https://dl.acm.org/doi/10.1145/3027063.3027086
dc.format.extent: 8 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m28lh8-7ywk
dc.identifier.citation: Munteanu, Cosmin, Pourang Irani, Sharon Oviatt, Matthew Aylett, Gerald Penn, Shimei Pan, Nikhil Sharma, et al. “Designing Speech, Acoustic and Multimodal Interactions.” In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 601–8. CHI EA ’17. New York, NY, USA: Association for Computing Machinery, 2017. https://doi.org/10.1145/3027063.3027086.
dc.identifier.uri: https://doi.org/10.1145/3027063.3027086
dc.identifier.uri: http://hdl.handle.net/11603/37204
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Natural Interaction
dc.subject: Emotion Recognition
dc.subject: Deep Neural Networks
dc.subject: Speech Recognition
dc.subject: Synthetic Speech
dc.subject: Electromyography
dc.subject: Brain-Computer Interface
dc.title: Designing Speech, Acoustic and Multimodal Interactions
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5989-8543
