Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers

Date

2021-04-23

Citation of Original Publication

Abdolrahmani, Ali, Maya Howes Gupta, Mei Lian Vader, Ravi Kuber, and Stacy Branham. "Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers," 2021. https://www.youtube.com/watch?v=OaatZSjdiKg.

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

Voice assistants (VAs) – like Amazon Alexa or Siri – offer hands-/eyes-free interactions that benefit a range of users, including individuals who are blind, by supporting tasks that would otherwise be difficult or inaccessible. While these interfaces model conversational interactions to achieve simple tasks, there have been recent calls for VAs that model more transactional interactions spanning a wider range of complex activities. In this study, we explored extending VAs’ capabilities in the context of indoor navigation through mixed-ability focus groups with blind and sighted airport travelers. We found substantial overlap in the difficulties encountered by blind and sighted travelers, as well as shared interest in a voice-activated travel assistant to improve travel experiences. Leveraging user-elicited recommendations, we present interaction design examples that showcase customization of individual and combined modalities, which collectively demonstrate how VAs can more broadly achieve transactional interactions in complex task scenarios.