Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers

dc.contributor.author: Ali Abdolrahmani
dc.contributor.author: Maya Gupta
dc.contributor.author: Mei-Lian Vader
dc.contributor.author: Ravi Kuber
dc.contributor.author: Stacy M. Branham
dc.date.accessioned: 2023-12-07T18:37:09Z
dc.date.available: 2023-12-07T18:37:09Z
dc.date.issued: 2021-05-07
dc.description: CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021
dc.description.abstract: Voice assistants (VAs) – like Amazon Alexa or Siri – offer hands-/eyes-free interactions that are beneficial to a range of users, including individuals who are blind, to fulfill tasks that are otherwise difficult or inaccessible. While these interfaces model conversational interactions to achieve simple tasks, there have been recent calls for VAs that model more transactional interactions for a wider range of complex activities. In this study, we explored the extension of VAs’ capabilities in the context of indoor navigation through mixed-ability focus groups with blind and sighted airport travelers. We found high overlap in the difficulties encountered by blind and sighted travelers, as well as shared interest in a voice-activated travel assistant to improve travel experiences. Leveraging user-elicited recommendations, we present interaction design examples that showcase customization of different and multiple modalities, which collectively demonstrate how VAs can more broadly achieve transactional interactions in complex task scenarios.
dc.description.sponsorship: The authors would like to thank the participants for their valuable feedback; Antony Rishin Mukkath Roy (UMBC) and Priyanka Hitesh Soni (UMBC) for their support in facilitating focus group sessions; and Areba Shahab Hazari (UCI), Tifany Tseng (UCI), Hipolito Ruiz (UCI), Sruti Vijaykumar (UMBC), and Kelly Dickenson (UMBC) for their assistance with transcription and coding. This project is supported by Toyota Manufacturing North America (000890-00001).
dc.description.uri: https://dl.acm.org/doi/10.1145/3411764.3445638
dc.format.extent: 16 pages
dc.genre: conference papers and proceedings
dc.identifier.citation: Abdolrahmani, Ali, Maya Howes Gupta, Mei-Lian Vader, Ravi Kuber, and Stacy Branham. “Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16. CHI ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3411764.3445638.
dc.identifier.uri: https://doi.org/10.1145/3411764.3445638
dc.identifier.uri: http://hdl.handle.net/11603/31032
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless the item is under a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
dc.subject: Accessibility
dc.subject: Voice Assistants
dc.subject: Indoor Navigation
dc.subject: Blind and Sighted Travelers
dc.title: Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-1095-3772

Files

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission