Show simple item record

dc.contributor.author: Abdolrahmani, Ali
dc.contributor.author: Gupta, Maya Howes
dc.contributor.author: Vader, Mei-Lian
dc.contributor.author: Kuber, Ravi
dc.contributor.author: Branham, Stacy
dcterms.creator: https://orcid.org/0000-0003-1095-3772
dc.date.accessioned: 2022-11-22T22:12:54Z
dc.date.available: 2022-11-22T22:12:54Z
dc.date.issued: 2021-04-23
dc.description: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), May 2021.
dc.description.abstract: Voice assistants (VAs) – like Amazon Alexa or Siri – offer hands-/eyes-free interactions that are beneficial to a range of users, including individuals who are blind, to fulfill tasks that are otherwise difficult or inaccessible. While these interfaces model conversational interactions to achieve simple tasks, there have been recent calls for VAs that model more transactional interactions for a wider range of complex activities. In this study, we explored the extension of VAs’ capabilities in the context of indoor navigation through mixed-ability focus groups with blind and sighted airport travelers. We found high overlap in the difficulties encountered by blind and sighted travelers, as well as shared interest in a voice-activated travel assistant to improve travel experiences. Leveraging user-elicited recommendations, we present interaction design examples that showcase customization of different and multiple modalities, which collectively demonstrate how VAs can more broadly achieve transactional interactions in complex task scenarios.
dc.description.uri: https://www.youtube.com/watch?v=OaatZSjdiKg
dc.format.extent: 30 seconds
dc.genre: video recordings
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2iqit-udde
dc.identifier.citation: Ali Abdolrahmani, Maya Howes Gupta, Mei-Lian Vader, Ravi Kuber, and Stacy Branham. 2021. Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 495, 1–16. https://doi.org/10.1145/3411764.3445638
dc.identifier.uri: http://hdl.handle.net/11603/26356
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Center for Women in Technology (CWIT)
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Blind and Sighted Travelers
dc.subject: Accessibility
dc.subject: Voice Assistants
dc.subject: Indoor Navigation
dc.title: Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers
dc.type: Moving Image


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)