Authors: Abdolrahmani, Ali; Gupta, Maya Howes; Vader, Mei-Lian; Kuber, Ravi; Branham, Stacy
Date issued: 2021-04-23
Date deposited: 2022-11-22
Citation: Abdolrahmani, Ali, Maya Howes Gupta, Mei-Lian Vader, Ravi Kuber, and Stacy Branham. "Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), May 2021. https://www.youtube.com/watch?v=OaatZSjdiKg
Handle: http://hdl.handle.net/11603/26356
Abstract: Voice assistants (VAs) – like Amazon Alexa or Siri – offer hands-/eyes-free interactions that benefit a range of users, including individuals who are blind, by fulfilling tasks that are otherwise difficult or inaccessible. While these interfaces model conversational interactions to achieve simple tasks, there have been recent calls for VAs that model more transactional interactions for a wider range of complex activities. In this study, we explored extending VAs' capabilities in the context of indoor navigation through mixed-ability focus groups with blind and sighted airport travelers. We found high overlap in the difficulties encountered by blind and sighted travelers, as well as shared interest in a voice-activated travel assistant to improve travel experiences. Leveraging user-elicited recommendations, we present interaction design examples that showcase customization of different and multiple modalities, which collectively demonstrate how VAs can more broadly achieve transactional interactions in complex task scenarios.
Duration: 30 seconds
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it carries a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Subjects: Blind and Sighted Travelers; Accessibility; Voice Assistants; Indoor Navigation
Title: Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers
Type: Moving Image