Building Robust Human Activity Recognition Models from Unlabeled Data

dc.contributor.advisor: Roy, Nirmalya
dc.contributor.author: Faridee, Abu Zaher Md
dc.contributor.department: Information Systems
dc.contributor.program: Information Systems
dc.date.accessioned: 2023-04-05T14:17:31Z
dc.date.available: 2023-04-05T14:17:31Z
dc.date.issued: 2022-01-01
dc.description.abstract: Machine learning-driven, wearable sensor-based human activity recognition (HAR) systems have seen a meteoric rise in popularity in recent years across healthcare, entertainment, and physical fitness applications, but their large-scale adoption has been hampered by several open challenges. The availability of rapidly evolving consumer-grade wearable devices (smart-watches, smart-rings, ear-worn devices), substantial variability in the activities performed by a large number of users, each with their own personal style and demographic variations, and the potential placement of wearables at different body positions together introduce significant domain and category shifts. The cost-prohibitive nature of developing a large corpus of annotated samples to cover all these heterogeneities is a major hindrance to the development and adoption of scalable supervised HAR models. In response, the recent machine learning literature has increasingly relied on discovering salient features from unlabeled samples. However, these models still impose restrictions on the model architecture (e.g., inability to handle simultaneous heterogeneities, the requirement of labeled samples or synchronized data collection with multiple sensors, and lack of interpretability). Moreover, their performance still lags behind that of their supervised counterparts, hindering real-world adoption. In this thesis, we focus on building scalable machine learning models for HAR that are robust against domain shifts with minimal-to-no extra label information, and on discovering the optimal transferability of representations between domains. To that end, we propose a number of deep self-supervised, unsupervised, and adversarial representation learning techniques, along with learnable data augmentation.
We first present AugToAct, a self-supervised representation learning method that applies random data transformations with a reconstruction loss to automatically learn salient features from unlabeled samples, retaining over an 80% F1 score with only 6% of the samples labeled. We then extend this self-supervised module to a cross-user semi-supervised domain adaptation setup, where it outperforms most state-of-the-art models by a 5% F1 score. In our next work, StranGAN, we propose a novel interpretable unsupervised domain adaptation method that adversarially learns a set of affine transformations to align the raw data distributions of unlabeled source- and target-domain samples without (a) modifying the source classifier or (b) requiring access to synchronized source and target labeled samples, while outperforming the state of the art by a 5% F1 score. Finally, we present CoDEm, which exploits domain-label metadata (e.g., subjects' gender, sensor position) to learn a set of domain embeddings that capture the salient features of the underlying heterogeneity. By combining these domain embeddings with a novel residual attention mechanism, CoDEm improves F1 performance by up to 9.5% over several multi-task learning setups on three public datasets, without any loss-balancing hyper-parameter search.
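As a rough illustration of the kind of augmentation-driven self-supervised pretext task the abstract describes for AugToAct, the sketch below applies random transformations (jitter, per-channel scaling) to an unlabeled sensor window and evaluates a reconstruction (mean-squared-error) objective against the original signal. This is a hedged, minimal sketch under assumed details, not the thesis implementation: the transformation set, window size, and all function names here are hypothetical, and the "model" is omitted so only the augmentation and loss target are shown.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def jitter(x, sigma=0.05):
    # Add Gaussian noise to every timestep (hypothetical augmentation).
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    # Multiply each sensor channel by a random factor (hypothetical augmentation).
    factors = rng.normal(1.0, sigma, (1, x.shape[1]))
    return x * factors

def random_transform(x):
    # Compose a random subset of augmentations, as in augmentation-based
    # self-supervised pretext tasks for sensor data.
    for t in (jitter, scale):
        if rng.random() < 0.5:
            x = t(x)
    return x

def reconstruction_loss(pred, target):
    # Mean squared error between a reconstruction and the clean original;
    # a self-supervised model would minimize this without any activity labels.
    return float(np.mean((pred - target) ** 2))

# Toy "unlabeled" accelerometer window: 128 timesteps x 3 axes.
window = np.sin(np.linspace(0, 4 * np.pi, 128))[:, None] * np.ones((1, 3))
augmented = random_transform(window)

# In the pretext task, a network would map `augmented` back toward `window`;
# here we just evaluate the loss of the (untrained) augmented view itself.
loss = reconstruction_loss(augmented, window)
print(f"reconstruction MSE: {loss:.6f}")
```

In a real pipeline, an encoder-decoder network trained to minimize this loss on many such windows would then serve as a feature extractor for the downstream activity classifier.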
dc.format: application/pdf
dc.genre: dissertations
dc.identifier: doi:10.13016/m2rzwo-nh6q
dc.identifier.other: 12603
dc.identifier.uri: http://hdl.handle.net/11603/27363
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.source: Original File Name: Faridee_umbc_0434D_12603.pdf
dc.subject: adversarial learning
dc.subject: domain adaptation
dc.subject: human activity recognition
dc.subject: self supervision
dc.subject: unsupervised learning
dc.subject: wearable sensing
dc.title: Building Robust Human Activity Recognition Models from Unlabeled Data
dc.type: Text
dcterms.accessRights: Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
dcterms.accessRights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu

Files

Original bundle

Name: Faridee_umbc_0434D_12603.pdf
Size: 14.09 MB
Format: Adobe Portable Document Format