A Scalable and Low Power Deep Convolutional Neural Network for Multimodal Data Classification in Embedded Real-Time Systems

dc.contributor.advisor: Mohsenin, Tinoosh
dc.contributor.author: Jafari, Ali
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Engineering, Computer
dc.date.accessioned: 2019-10-11T12:34:15Z
dc.date.available: 2019-10-11T12:34:15Z
dc.date.issued: 2017-01-01
dc.description.abstract: Multimodal time series signals are generated by different sensors, such as accelerometers, magnetometers, gyroscopes, and heart rate monitors, where each sensor typically has its own number of input channels and its own sampling rate. Conventionally, separate signal processing techniques, such as feature extraction and classification, are employed to process the data generated by each sensor modality, which: 1) can lead to a long design time, 2) requires expert knowledge to design the features, and 3) does not scale when new sensors are added. Moreover, with recent advances in the Internet of Things (IoT) and wearable devices, a major challenge is the ability to efficiently deploy multimodal signal processing techniques in embedded, resource-bound settings that have strict power and area budgets. In this dissertation we target these challenges. In the first contribution, we propose "SensorNet," a scalable deep convolutional neural network designed to classify multimodal time series signals. The raw time series signals generated by different sensor modalities at different sampling rates are first fused into images; then, a Deep Convolutional Neural Network (DCNN) automatically learns shared features from the images and performs the classification. SensorNet: (1) is scalable, as it can process different types of time series data with a variety of input channels and sampling rates; (2) does not need separate signal processing techniques for the data generated by each sensor modality; (3) does not require expert knowledge to extract features from each sensor's data; (4) makes it easy and fast to adapt to a new sensor modality with a different sampling rate; (5) achieves very high detection accuracy across different case studies; and (6) has a very efficient architecture, which makes it suitable for IoT and wearable devices. In the second contribution, we propose a custom low-power hardware architecture for the efficient deployment of SensorNet on resource-limited embedded devices; it performs the entire SensorNet signal processing chain in real time with minimal energy consumption, and it is fully reconfigurable for different applications with various requirements. Finally, we propose a stand-alone dual-mode Tongue Drive System (sdTDS) that employs SensorNet to perform all of the required multimodal signal processing in real time. sdTDS is a wireless wearable headset that individuals with severe disabilities can use to control their environment, such as a computer, smartphone, or wheelchair, using voluntary tongue and head movements. SensorNet's performance is evaluated using three case studies, Physical Activity Monitoring, sdTDS, and Stress Detection, for which it achieves average detection accuracies of 98%, 96.2%, and 94%, respectively. Furthermore, we implement SensorNet using our custom hardware architecture on a Xilinx Artix-7 FPGA, where it consumes 17 mJ, 9 mJ, and 3.5 mJ of energy for the Physical Activity Monitoring, sdTDS, and Stress Detection case studies, respectively. To further reduce power consumption, SensorNet is implemented as an ASIC at the post-layout level in 65-nm CMOS technology, consuming approximately 7x less power than the FPGA implementation. Additionally, SensorNet is implemented on an NVIDIA Jetson TX2 SoC (CPU+GPU), an embedded commercial off-the-shelf platform. Compared to the TX2 single-core CPU and GPU implementations, FPGA-based SensorNet achieves 8x and 12x lower power consumption and 71x and 3x lower energy consumption, respectively. Furthermore, SensorNet achieves 200x, 63x, and 27x lower energy consumption than previous related work. Overall, SensorNet is a generic deep neural network that can accommodate a wide range of applications with minimal effort.
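As a concrete illustration of the fusion-then-classify idea described in the abstract, the following Python sketch fuses synthetic streams from sensors with different channel counts and sampling rates into a single 2-D "image" and classifies it with a small convolutional network. It is a minimal sketch, not the dissertation's actual design: the window length, the interpolation-based fusion, the layer sizes, and the class count are all illustrative assumptions.

    # Minimal sketch of the SensorNet idea (illustrative, not the
    # dissertation's exact pipeline): fuse multimodal time series into
    # one 2-D image, then classify it with a small DCNN.
    import numpy as np
    import torch
    import torch.nn as nn

    def fuse_to_image(streams, target_len=128):
        """Resample each (channels, samples) stream to a shared length and
        stack every channel as one row of a (total_channels, target_len)
        image. Linear interpolation is an assumed fusion scheme."""
        rows = []
        for stream in streams:
            for ch in stream:
                x_old = np.linspace(0.0, 1.0, num=len(ch))
                x_new = np.linspace(0.0, 1.0, num=target_len)
                rows.append(np.interp(x_new, x_old, ch))
        return np.stack(rows).astype(np.float32)

    class SmallDCNN(nn.Module):
        """Illustrative stand-in for SensorNet's DCNN; layer sizes are
        assumptions, not the published architecture."""
        def __init__(self, in_rows, width, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Two 2x poolings shrink each spatial dimension by 4.
            self.classifier = nn.Linear(16 * (in_rows // 4) * (width // 4),
                                        n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Synthetic 2-second window: accelerometer (3 ch @ 100 Hz),
    # gyroscope (3 ch @ 50 Hz), heart rate monitor (1 ch @ 2 Hz).
    accel = np.random.randn(3, 200)
    gyro = np.random.randn(3, 100)
    hr = np.random.randn(1, 4)
    img = fuse_to_image([accel, gyro, hr])            # shape (7, 128)
    batch = torch.from_numpy(img)[None, None]         # shape (1, 1, 7, 128)
    logits = SmallDCNN(in_rows=7, width=128, n_classes=12)(batch)
    print(logits.shape)                               # torch.Size([1, 12])

Because every sensor contributes rows to the same image, adding a new modality only adds rows; the shared convolutional front end is reused unchanged, which is the scalability property the abstract emphasizes.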
dc.genre: dissertations
dc.identifier: doi:10.13016/m2xpie-8wo2
dc.identifier.other: 11782
dc.identifier.uri: http://hdl.handle.net/11603/15027
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
dc.source: Original File Name: Jafari_umbc_0434D_11782.pdf
dc.subject: Classification
dc.subject: Deep Neural Network
dc.subject: FPGA
dc.subject: Hardware architecture
dc.subject: Low Power
dc.subject: Multimodal data
dc.title: A Scalable and Low Power Deep Convolutional Neural Network for Multimodal Data Classification in Embedded Real-Time Systems
dc.type: Text
dcterms.accessRights: Access limited to the UMBC community. The item may be obtainable via Interlibrary Loan through a local library, pending the author/copyright holder's permission.

Files

Original bundle
Name: Jafari_umbc_0434D_11782.pdf
Size: 9.95 MB
Format: Adobe Portable Document Format
License bundle
Name: pdf021.pdf
Size: 461.28 KB
Format: Adobe Portable Document Format