Matuszek, Cynthia
Lewis, Timothy Arthur
2021-01-29
2021-01-29
2018-01-01
11856
http://hdl.handle.net/11603/20674

Communication is ubiquitous; the ability to express one's needs and desires is essential in every aspect of life. When a person loses the ability to speak, along with the motor skills needed to operate speech systems that require physical input, few alternatives for communication remain. Eye-gaze tracking offers unique opportunities to support communication; however, many current eye-gaze communication programs are unintuitive, inaccurate, and outdated. Many existing programs employ a visual keyboard on which the user visually types out the words and sentences they wish to speak. In this paper, I investigate an alternative to the standard virtual keyboard, developing a communicative flow that is intuitive and easy to use without speech. I create a simple application in which participants construct messages. I introduce a language model concept that gathers information about the environment and, using language groundings, determines the salience of items within a scene. This, in turn, allows the language model to make context-sensitive text predictions that better assist a user in communicating quickly and effectively. Using a gaze-tracking device as input to the interface, I then perform a user study to evaluate the efficacy and intuitiveness of the communication device. I found that when users are presented with context-based portions of speech, they communicate more easily and quickly than with a generic language model.

application/pdf
Augmentative and Alternative Communication Interface with a Context-based, Predictive Language Model
Text
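The core idea in the abstract, reweighting a generic language model's word predictions by the salience of grounded objects in the current scene, can be sketched as follows. This is an illustrative sketch only; the function name, the multiplicative boost, and all probabilities and salience values are assumptions for the example, not details from the thesis.

```python
# Hypothetical sketch of context-sensitive prediction: scores from a
# generic language model are boosted for words grounded to salient
# items in the visible scene, so scene-relevant words rank higher.

def predict_words(base_probs, scene_salience, boost=4.0, top_k=3):
    """Rank candidate words, boosting those grounded to salient scene items.

    base_probs:     dict word -> probability from a generic language model
    scene_salience: dict word -> salience score in [0, 1] for objects
                    currently visible in the scene (assumed given)
    """
    scored = {
        word: p * (1.0 + boost * scene_salience.get(word, 0.0))
        for word, p in base_probs.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

# A generic model ranks "water" low, but a visible cup of water makes
# it salient: 0.05 * (1 + 4.0 * 0.9) = 0.23, lifting it into the top 3.
base = {"the": 0.30, "want": 0.20, "water": 0.05, "going": 0.15}
scene = {"water": 0.9}
print(predict_words(base, scene))  # → ['the', 'water', 'want']
```

With an empty scene the ranking falls back to the generic model's order, which matches the comparison the user study makes between the context-based and generic conditions.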