Title: Connecting Deep Neural Networks with Symbolic Knowledge
Authors: Oates, Tim; Kumar, Arjun
Date Available: 2019-10-11
Date Issued: 2016-01-01
Identifier: 11544
URI: http://hdl.handle.net/11603/15481
Subjects: Autoencoder; Deep Neural Networks; Machine Learning; Symbolic Knowledge
Type: Text

Abstract: Neural networks have attracted significant interest in recent years due to their exceptional performance in domains ranging from natural language processing to image identification and classification. Modern deep neural networks achieve state-of-the-art results on complex tasks such as epileptic seizure detection and time series classification. The internal architecture of these networks, in terms of learned representations, nevertheless remains opaque. This research addresses the first step in the long-term goal of constructing a bi-directional connection between raw input data and their symbolic representations. We examined whether a denoising autoencoder can internally find correlated principal features from input images and their symbolic representations that can be used to generate one from the other. Our results indicate that training on symbolic representations along with the raw inputs yields better reconstructions. Our network was able to construct the symbolic representations from the input, as well as input instances from their symbolic representations.

Rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
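The approach described in the abstract can be illustrated with a minimal sketch: a single-hidden-layer denoising autoencoder trained on concatenated [raw input | symbolic one-hot] vectors. Zero-masking part of the input during training forces the network to learn correlations between the two halves, so at test time the symbolic half can be zeroed and recovered from the raw half. All sizes, toy data, and training settings below are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: four 8-dim "images", each paired with a one-hot symbolic code.
images = rng.random((4, 8))
symbols = np.eye(4)
data = np.hstack([images, symbols])  # each row: [raw input | symbol]

n_in, n_hid = data.shape[1], 6
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_mse = np.mean((forward(data)[1] - data) ** 2)

lr = 1.0
for _ in range(3000):
    mask = rng.random(data.shape) > 0.3   # denoising: zero-mask ~30% of inputs
    noisy = data * mask
    h, out = forward(noisy)
    d_out = (out - data) * out * (1 - out)  # target is the *clean* input
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(data); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * noisy.T @ d_hid / len(data); b1 -= lr * d_hid.mean(axis=0)

final_mse = np.mean((forward(data)[1] - data) ** 2)

# Cross-modal probe: feed the raw input with the symbolic half zeroed out
# and read the predicted symbol from the reconstructed symbolic half.
probe = np.hstack([images, np.zeros_like(symbols)])
predicted = forward(probe)[1][:, 8:].argmax(axis=1)
```

The same trick runs in the other direction: zeroing the raw half and feeding only the symbolic code asks the network to reconstruct an input instance from its symbol, which is the bi-directional mapping the abstract describes.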