Building Robust Human Activity Recognition Models from Unlabeled Data
dc.contributor.advisor | Roy, Nirmalya | |
dc.contributor.author | Faridee, Abu Zaher Md | |
dc.contributor.department | Information Systems | |
dc.contributor.program | Information Systems | |
dc.date.accessioned | 2023-04-05T14:17:31Z | |
dc.date.available | 2023-04-05T14:17:31Z | |
dc.date.issued | 2022-01-01 | |
dc.description.abstract | Machine learning-driven, wearable sensor-based human activity recognition (HAR) systems have gained meteoric popularity in recent years in healthcare, entertainment, and physical fitness applications, but their large-scale adoption has been hampered by several open challenges. Rapidly evolving consumer-grade wearable devices (smartwatches, smart rings, ear-worn devices) and the substantial variability in the activities performed by a large population of users, each with their own personal style, demographic variations, and potentially different on-body sensor placements, introduce significant domain and category shifts. The cost-prohibitive nature of developing a large corpus of annotated samples to cover all these heterogeneities is a major hindrance to the development and adoption of scalable supervised HAR models. In response, the recent machine learning literature has increasingly relied on discovering salient features from unlabeled samples. However, these models still impose restrictions on the model architecture (e.g., inability to handle simultaneous heterogeneities, the requirement of labeled samples or synchronized data collection with multiple sensors, and lack of interpretability). Moreover, their performance still lags behind that of their supervised counterparts, hindering real-world adoption. In this thesis, we focus on building scalable machine learning models for HAR that are robust against domain shifts with minimal-to-no extra label information and that discover the optimal transferability of representations between domains. To that end, we propose a number of deep self-supervised, unsupervised, and adversarial representation learning techniques, along with learnable data augmentation. We first present AugToAct, a self-supervised representation learning method that uses random data transformations with a reconstruction loss to automatically learn salient features from unlabeled samples and retains over 80% F1 score with only 6% of the labeled samples. We then extend this self-supervised module to a cross-user semi-supervised domain adaptation setup, where it outperforms most state-of-the-art models by a 5% F1 score. In our next work, StranGAN, we propose a novel interpretable unsupervised domain adaptation method that adversarially learns a set of affine transformations to align the raw data distributions of unlabeled source and target domain samples without (a) modifying the source classifier or (b) requiring access to synchronized source and target labeled samples, while outperforming the state of the art by a 5% F1 score. Finally, we present CoDEm, which exploits domain label metadata (subjects' gender, sensor position, etc.) to learn a set of domain embeddings that capture the salient features of the underlying heterogeneity. CoDEm improves F1 performance by up to 9.5% over several multi-task learning setups on three public datasets by combining these domain embeddings with a novel residual attention mechanism, without any loss-balancing hyper-parameter search. | |
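The abstract describes self-supervised pretraining on unlabeled sensor windows via random data transformations and a reconstruction loss. The sketch below is not the dissertation's code; it is a minimal, illustrative Python (PyTorch) example of that general idea, with all module and function names chosen here as placeholders.

```python
# Minimal sketch (illustrative only): self-supervised pretraining for wearable-sensor
# HAR in the spirit of random data transformation + reconstruction loss on unlabeled
# windows. All names (ConvAutoencoder, random_transform, pretrain_step) are hypothetical.
import torch
import torch.nn as nn


def random_transform(x: torch.Tensor) -> torch.Tensor:
    """Apply simple random augmentations to a batch of sensor windows.

    x: (batch, channels, time) accelerometer/gyroscope windows.
    """
    # Jitter: additive Gaussian noise.
    x = x + 0.05 * torch.randn_like(x)
    # Scaling: per-channel random magnitude change.
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1, device=x.device)
    return x * scale


class ConvAutoencoder(nn.Module):
    """Tiny 1-D convolutional encoder/decoder over sensor windows."""

    def __init__(self, channels: int = 6, feat_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(feat_dim, channels, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def pretrain_step(model, batch, optimizer):
    """One self-supervised step: reconstruct the clean window from its
    randomly transformed version (denoising-style objective)."""
    corrupted = random_transform(batch)
    recon = model(corrupted)
    loss = nn.functional.mse_loss(recon, batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = ConvAutoencoder(channels=6)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    unlabeled = torch.randn(16, 6, 128)  # stand-in for unlabeled sensor windows
    print("reconstruction loss:", pretrain_step(model, unlabeled, optimizer))
```

After such pretraining, the encoder's features can be fine-tuned with a small labeled subset for activity classification, which is the regime the abstract refers to when it reports high F1 with only a small fraction of labeled samples.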
dc.format | application/pdf | |
dc.genre | dissertations | |
dc.identifier | doi:10.13016/m2rzwo-nh6q | |
dc.identifier.other | 12603 | |
dc.identifier.uri | http://hdl.handle.net/11603/27363 | |
dc.language | en | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Collection | |
dc.relation.ispartof | UMBC Theses and Dissertations Collection | |
dc.relation.ispartof | UMBC Graduate School Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.source | Original File Name: Faridee_umbc_0434D_12603.pdf | |
dc.subject | adversarial learning | |
dc.subject | domain adaptation | |
dc.subject | human activity recognition | |
dc.subject | self supervision | |
dc.subject | unsupervised learning | |
dc.subject | wearable sensing | |
dc.title | Building Robust Human Activity Recognition Models from Unlabeled Data | |
dc.type | Text | |
dcterms.accessRights | Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission. | |
dcterms.accessRights | This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu |
Files
Original bundle
- Name: Faridee_umbc_0434D_12603.pdf
- Size: 14.09 MB
- Format: Adobe Portable Document Format