Browsing by Subject "domain adaptation"
Now showing 1 - 3 of 3
Item
Building Robust Human Activity Recognition Models from Unlabeled Data (2022-01-01)
Faridee, Abu Zaher Md; Roy, Nirmalya
Information Systems

Machine learning-driven, wearable-sensor-based human activity recognition (HAR) systems have seen a meteoric rise in popularity in recent years in healthcare, entertainment, and physical fitness applications, but their large-scale adoption has been hampered by several open challenges. Rapidly evolving consumer-grade wearable devices (smartwatches, smart rings, ear-worn devices) and substantial variability in the activities performed by a large number of users, each with their own personal style and demographics and with wearables potentially placed at different body positions, introduce significant domain and category shifts. The cost-prohibitive nature of developing a large corpus of annotated samples to cover all these heterogeneities is a major hindrance to the development and adoption of scalable supervised HAR models. In response, the recent machine learning literature has increasingly relied on discovering salient features from unlabeled samples. However, these models still impose restrictions on the model architecture (i.e., an inability to handle simultaneous heterogeneities, a requirement for labeled samples or synchronized data collection with multiple sensors, and a lack of interpretability). Moreover, their performance still lags behind their supervised counterparts, hindering real-world adoption. In this thesis, we focus on building scalable machine learning models for HAR that are robust against domain shifts with minimal-to-no extra label information, and on discovering the optimal transferability of representations between domains. To that end, we propose a number of deep self-supervised, unsupervised, and adversarial representation learning techniques, along with learnable data augmentation. We first present AugToAct, a self-supervised representation learning method that uses random data transformations with a reconstruction loss to automatically learn salient features from unlabeled samples, retaining an F1 score above 80% with only 6% of samples labeled. We then extend this self-supervised module to a cross-user semi-supervised domain adaptation setup, where it outperforms most state-of-the-art models by 5% in F1 score. In our next work, StranGAN, we propose a novel interpretable unsupervised domain adaptation method that adversarially learns a set of affine transformations to align the raw data distributions of unlabeled source- and target-domain samples, without (a) modifying the source classifier or (b) requiring access to synchronized, labeled source and target samples, while outperforming the state of the art by 5% in F1 score. Finally, we present CoDEm, which exploits domain-label metadata (subjects' gender, sensor position, etc.) to learn a set of domain embeddings that capture the salient features of the underlying heterogeneity. By combining these domain embeddings with a novel residual attention mechanism, CoDEm improves F1 performance by up to 9.5% over several multi-task learning setups on three public datasets, without any loss-balancing hyperparameter search.
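As a rough illustration of the self-supervised pretraining idea this abstract attributes to AugToAct (random data transformations plus a reconstruction loss on unlabeled sensor windows), here is a minimal PyTorch-style sketch. The tiny architecture, the specific transformations, and `unlabeled_loader` are illustrative placeholders, not the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis's AugToAct code): pretrain an encoder on
# unlabeled sensor windows by reconstructing the clean window from a randomly
# transformed version; the encoder can then be fine-tuned with few labels.
import torch
import torch.nn as nn

def random_transform(x):
    """Randomly jitter and scale a batch of sensor windows of shape (B, C, T)."""
    noise = 0.05 * torch.randn_like(x)                         # additive jitter
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1)   # per-channel scaling
    return x * scale + noise

encoder = nn.Sequential(nn.Conv1d(3, 32, 5, padding=2), nn.ReLU(),
                        nn.Conv1d(32, 64, 5, padding=2), nn.ReLU())
decoder = nn.Sequential(nn.Conv1d(64, 32, 5, padding=2), nn.ReLU(),
                        nn.Conv1d(32, 3, 5, padding=2))

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
mse = nn.MSELoss()

for x in unlabeled_loader:                   # hypothetical DataLoader of (B, 3, T) windows
    recon = decoder(encoder(random_transform(x)))
    loss = mse(recon, x)                     # reconstruct the clean, untransformed window
    opt.zero_grad(); loss.backward(); opt.step()
```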
Item
Cross-Modal Scene Networks (IEEE, 2017-09-18)
Aytar, Yusuf; Castrejon, Lluis; Vondrick, Carl; Pirsiavash, Hamed; Torralba, Antonio

People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn an intermediate representation that is not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.

Item
A simple baseline for domain adaptation using rotation prediction (2019-12-26)
Tejankar, Ajinkya; Pirsiavash, Hamed

Domain adaptation has recently become a hot research area with many applications. The goal is to adapt a model trained on one domain to another domain with scarce annotated data. We propose a simple yet effective method based on self-supervised learning that outperforms or is on par with most state-of-the-art algorithms, e.g., adversarial domain adaptation. Our method involves two phases: predicting random rotations (self-supervised) on the target domain along with correct labels for the source domain (supervised), and then using self-distillation on the target domain. Our simple method achieves state-of-the-art results for semi-supervised domain adaptation on the DomainNet dataset. Further, we observe that the unlabeled target datasets of popular domain adaptation benchmarks do not contain any categories apart from the testing categories. We believe this introduces a bias that does not exist in many real applications. We show that removing this bias from the unlabeled data results in a large performance drop for state-of-the-art methods, while our simple method is relatively robust.
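As a rough illustration of the first phase this last abstract describes (supervised classification on labeled source images combined with self-supervised rotation prediction on unlabeled target images), here is a minimal PyTorch-style sketch. The tiny backbone, the heads, and the data loaders are hypothetical placeholders, not the authors' released code.

```python
# Illustrative sketch of joint source classification + target rotation
# prediction (phase one of the paper's method); all names are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Rotate each image in (B, C, H, W) by a random multiple of 90 degrees."""
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(x, k)])
    return rotated, k                               # images and 4-way rotation labels

backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
cls_head = nn.Linear(32, 10)                        # source classifier (10 classes = placeholder)
rot_head = nn.Linear(32, 4)                         # 4-way rotation classifier
opt = torch.optim.Adam([*backbone.parameters(), *cls_head.parameters(),
                        *rot_head.parameters()], lr=1e-3)

for (xs, ys), xt in zip(source_loader, target_loader):   # hypothetical loaders
    xt_rot, rot_labels = rotate_batch(xt)
    loss = (F.cross_entropy(cls_head(backbone(xs)), ys)                # supervised source loss
            + F.cross_entropy(rot_head(backbone(xt_rot)), rot_labels)) # self-supervised target loss
    opt.zero_grad(); loss.backward(); opt.step()
```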
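Similarly, for the Cross-Modal Scene Networks entry above, the sketch below shows one common way to encourage a modality-agnostic shared representation: modality-specific encoders feeding a shared classifier, with a feature-alignment penalty on paired examples. This is an assumption about the general technique, not necessarily the paper's exact regularization; the encoders, classifier, and `paired_loader` are placeholders.

```python
# Illustrative sketch (not the paper's exact method): align features from two
# modalities (e.g., a photo and a sketch of the same scene) so that a single
# classifier works on either one.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder():
    return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

enc_a, enc_b = make_encoder(), make_encoder()   # one encoder per modality
classifier = nn.Linear(32, 10)                  # shared scene classifier (10 = placeholder)
opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters(),
                        *classifier.parameters()], lr=1e-3)

for xa, xb, y in paired_loader:                 # hypothetical loader of paired scenes + labels
    fa, fb = enc_a(xa), enc_b(xb)
    loss = (F.cross_entropy(classifier(fa), y)  # both modalities share one classifier
            + F.cross_entropy(classifier(fb), y)
            + F.mse_loss(fa, fb))               # alignment term pulls paired features together
    opt.zero_grad(); loss.backward(); opt.step()
```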