A Framework for Integrating Deep Learning into the Development of Accessibility Tools



Type of Work
Mathematics and Computer Science



This thesis focuses specifically on neural networks, but much of what is discussed transfers to other machine learning techniques. It provides resources and guidance alongside a real research project that illustrates how everything comes together. The aim is to help someone with an interest in programming, supercomputers, and AI explore and combine these subsets of computer science in a more streamlined fashion. While we will not cover all of the surrounding skills and knowledge necessary, supplementary resources and explanations are provided to bridge the information gap for readers and enable them to pursue the content in this guide. This paper covers hardware, libraries, licensing, and techniques, as well as other crucial yet elusive information. The intent is to create a clear path to understanding and implementing machine learning techniques. The main research that serves as the example applies modular preprocessing techniques, high-performance computing, and neural network architectures to recognize and localize audio. This research will be the backbone of software that gives visual indicators for directional sound in video games, many of which have insufficient accessibility features for the Deaf and Hard of Hearing community. The success of the algorithm will be measured in terms of accuracy, training runtime, real-time processing of data, and computational load. These runtime, real-time, and computational-load measures of success are largely absent from academic papers on artificial intelligence, which limits the academic community's potential to leverage such work in real-world solutions. This paper aims to help bridge this gap in the literature; it is written during a working stage of the research, so not all components are complete.
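The real-time criterion mentioned above can be made concrete with a simple latency measurement: time many inference passes and compare the mean per-frame latency against the audio frame budget. The sketch below is illustrative only and does not reproduce the thesis's network; the stand-in model (a single matrix product), the spectral-bin count, sample rate, and hop size are all assumptions chosen for the example.

```python
import time
import numpy as np

# Stand-in for an audio-localization network: one dense layer mapping a
# spectrogram frame to direction-bin scores. The thesis's actual model is
# a neural network; this placeholder only exercises the timing harness.
rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 8))  # 1024 spectral bins -> 8 direction bins

def forward(frame):
    """Illustrative 'inference' step: a single matrix-vector product."""
    return frame @ weights

def mean_latency_ms(n_frames=200):
    """Average per-frame inference time in milliseconds."""
    frame = rng.standard_normal(1024)
    start = time.perf_counter()
    for _ in range(n_frames):
        forward(frame)
    return (time.perf_counter() - start) / n_frames * 1000.0

# Real-time budget (assumed parameters): at 44.1 kHz with a 512-sample
# hop, a new frame arrives roughly every 11.6 ms, so inference must
# finish well inside that window to keep up with the game's audio.
budget_ms = 512 / 44_100 * 1000.0
latency = mean_latency_ms()
print(f"mean latency {latency:.3f} ms, budget {budget_ms:.1f} ms, "
      f"real-time: {latency < budget_ms}")
```

The same harness applies to a real model by swapping `forward` for the trained network's inference call; reporting the measured latency next to the frame budget is one way to make the "real-time processing" criterion reproducible.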