Representation Learning on Time Series with Symbolic Approximation and Deep Learning

Date

2016-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.

Abstract

Most real-world data has a temporal component, whether it is measurements of natural (weather, sound) or man-made (stock market, robotics, and even speech and language) phenomena. The analysis of temporal data has been an active research area for decades and is still considered a challenge in machine learning and data mining because of the intrinsic temporal correlation structure. In this thesis, we propose three novel approaches to represent and model time series. Time-Warping SAX and Pooling SAX are two extensions of the vanilla SAX symbolic representation of time series. Time-Warping SAX extracts linear temporal dependencies by building a time-delay embedding vector to construct more informative SAX words; Pooling SAX applies a non-parametric weighting scheme to extract significant variables. Both are data-adaptive models that achieve state-of-the-art accuracy on time-series classification problems. We also propose the Gramian Angular Field (GAF) and the Markov Transition Field (MTF), two novel approaches that encode a time series as an image. These representations not only lend themselves to visual inspection by humans but, when combined with deep learning approaches (convolutional networks and denoising auto-encoders), achieve state-of-the-art performance compared to other modern algorithms on classification and regression/imputation problems for different types of temporal data and trajectories. GAF and MTF are non-data-adaptive approaches that allow us to learn models and extract the abstract representations supported by model-based approaches. Finally, we develop a set of exponential-form error estimators (NRAE/NAAE), together with their learning procedures (Adaptive Training), to attack the non-convex optimization problems that arise in training deep neural networks. Both in theory and in practice, they achieve optimality in accuracy and robustness against outliers and noise. They offer another perspective on the non-convexity of deep learning in high-dimensional learning and recurrent architectures, and they benefit the modeling of high-dimensional temporal data.
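
To make the image-encoding idea mentioned in the abstract concrete, the following is a minimal sketch of a Gramian Angular Summation Field encoding: the series is min-max rescaled to [-1, 1], each value is mapped to an angle via arccos, and the pairwise matrix cos(phi_i + phi_j) forms the image. The function name and the rescaling choice are illustrative assumptions, not the exact implementation used in the thesis.

    import numpy as np

    def gasf(series):
        """Sketch of a Gramian Angular Summation Field (GASF) encoding."""
        x = np.asarray(series, dtype=float)
        # Min-max rescale to [-1, 1] so that arccos is defined everywhere.
        x = (2 * x - x.max() - x.min()) / (x.max() - x.min())
        phi = np.arccos(np.clip(x, -1.0, 1.0))       # polar-coordinate angles
        return np.cos(phi[:, None] + phi[None, :])   # n x n image

    # Example: a short sine wave becomes a 32 x 32 image suitable for a ConvNet.
    t = np.linspace(0, 2 * np.pi, 32)
    image = gasf(np.sin(t))
    print(image.shape)  # (32, 32)

The resulting matrix preserves temporal dependencies along its diagonal direction, which is what makes convolutional networks a natural fit for the encoded images.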