Augmentative and Alternative Communication Interface with a Context-based, Predictive Language Model

dc.contributor.advisor: Matuszek, Cynthia
dc.contributor.author: Lewis, Timothy Arthur
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Computer Science
dc.date.accessioned: 2021-01-29T18:12:11Z
dc.date.available: 2021-01-29T18:12:11Z
dc.date.issued: 2018-01-01
dc.description.abstract: Communication is ubiquitous: being able to describe one's needs and desires is essential in every aspect of life. When a person loses the ability to speak physically, along with the motor skills needed to operate speech systems that require physical input, few alternatives for communication remain. Eye-gaze tracking offers unique opportunities to support communication; however, many current eye-gaze communication programs are unintuitive, inaccurate, and outdated. Most existing programs employ a virtual keyboard that the user gazes at to type out the words and sentences they wish to speak. In this paper, I investigate an alternative to the standard virtual-keyboard approach, developing a communicative flow that is intuitive and easy to use without speech. I create a simple application with which participants construct messages. I introduce a language model concept that gathers information about the environment and, using language groundings, determines the salience of items within a scene. This, in turn, allows the language model to make context-sensitive text predictions that better assist a user in communicating quickly and effectively. Using a gaze-tracking device as input to the interface, I then perform a user study to evaluate the efficacy and intuitiveness of the communication device. I found that when a user is presented with context-based portions of speech, they can communicate more easily and quickly than with a generic language model.
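As an illustrative sketch only: the abstract describes using language groundings to score the salience of items in a scene and biasing the language model's predictions toward contextually relevant words. The short Python example below shows one plausible way such context weighting could work; the object labels, word counts, salience scores, and the context_weight parameter are all hypothetical stand-ins, not values or code from the thesis.

    # Minimal sketch of context-weighted word prediction: words grounded to
    # salient scene objects get boosted scores before ranking.
    from collections import Counter

    # Hypothetical scene description: object label -> salience score.
    scene_salience = {"cup": 0.9, "table": 0.4, "window": 0.2}

    # Toy generic unigram model standing in for a trained language model.
    unigram_counts = Counter({"the": 120, "cup": 15, "want": 40, "table": 10, "hello": 25})

    # Hand-written groundings: vocabulary word -> scene object it refers to.
    groundings = {"cup": "cup", "drink": "cup", "table": "table", "window": "window"}

    def predict(prefix, k=3, context_weight=2.0):
        """Return the top-k words starting with `prefix`, re-ranked by scene salience."""
        total = sum(unigram_counts.values())
        scored = []
        for word, count in unigram_counts.items():
            if not word.startswith(prefix):
                continue
            score = count / total
            obj = groundings.get(word)
            if obj in scene_salience:
                # Boost words grounded to salient objects in the current scene.
                score *= 1.0 + context_weight * scene_salience[obj]
            scored.append((score, word))
        return [w for _, w in sorted(scored, reverse=True)[:k]]

    print(predict(""))  # -> ['the', 'cup', 'want']: 'cup' is promoted by scene context

In a deployed system, the unigram counts would be replaced by a trained language model and the salience scores would come from the gaze tracker and scene perception rather than being hard-coded.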
dc.format: application/pdf
dc.genre: theses
dc.identifier: doi:10.13016/m2em36-lz1x
dc.identifier.other: 11856
dc.identifier.uri: http://hdl.handle.net/11603/20674
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.source: Original File Name: Lewis_umbc_0434M_11856.pdf
dc.title: Augmentative and Alternative Communication Interface with a Context-based, Predictive Language Model
dc.type: Text
dcterms.accessRights: Distribution rights granted to UMBC by the author.
dcterms.accessRights: Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
dcterms.accessRights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Files

Original bundle
Name: Lewis_umbc_0434M_11856.pdf
Size: 699.52 KB
Format: Adobe Portable Document Format

License bundle
Name: TimothyArthurLAugmentative_Open.pdf
Size: 43.04 KB
Format: Adobe Portable Document Format