Cross-Domain Unseen Activity Recognition Using Transfer Learning
dc.contributor.author | Faridee, Abu Zaher Md | |
dc.contributor.author | Chakma, Avijoy | |
dc.contributor.author | Hasan, Zahid | |
dc.contributor.author | Roy, Nirmalya | |
dc.contributor.author | Misra, Archan | |
dc.date.accessioned | 2022-09-19T17:05:18Z | |
dc.date.available | 2022-09-19T17:05:18Z | |
dc.date.issued | 2022-07-14 | |
dc.description.abstract | We explore the effect of auxiliary labels in improving the classification accuracy of wearable sensor-based human activity recognition (HAR) systems, which are primarily trained with the supervision of the activity labels (e.g., running, walking, jumping). Supplemental meta-data, such as the body positions of the wearable sensors, subjects' demographic information (e.g., gender, age), and the type of wearable used (e.g., smartphone, smartwatch), are often available during the data collection process. This information, while not directly related to the activity classification task, can nonetheless provide auxiliary supervision and has the potential to significantly improve HAR accuracy by offering extra guidance on how to handle the sample heterogeneity introduced by the change in domains (i.e., positions, persons, or sensors), especially in the presence of limited activity labels. However, integrating such meta-data information into the classification pipeline is non-trivial: (i) the complex interaction between the activity and domain label spaces is hard to capture with a simple multi-task and/or adversarial learning setup, and (ii) meta-data and activity labels might not be simultaneously available for all collected samples. To address these issues, we propose a novel framework, Conditional Domain Embeddings (CoDEm). From the available unlabeled raw samples and their domain meta-data, we first learn a set of domain embeddings using a contrastive learning methodology to handle inter-domain variability and inter-domain similarity. To classify the activities, CoDEm then learns the label embeddings in a contrastive fashion, conditioned on the domain embeddings via a novel attention mechanism, forcing the model to learn the complex domain-activity relationships. We extensively evaluate CoDEm on three benchmark datasets against a number of multi-task and adversarial learning baselines and achieve state-of-the-art performance in each case. | en_US |
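The abstract describes a two-stage idea: domain embeddings learned contrastively from meta-data labels, then activity classification conditioned on those embeddings through attention. The sketch below is only an illustrative reading of that description, not the authors' CoDEm implementation; the module names, dimensions, SupCon-style loss, and multi-head-attention conditioning are all assumptions made for exposition.

```python
# Illustrative sketch only -- not the authors' code. All names, dimensions,
# and loss/attention choices are assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """1D-CNN feature extractor for a window of wearable sensor readings."""
    def __init__(self, in_channels=6, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        return F.normalize(self.net(x), dim=-1)

def supervised_contrastive_loss(emb, labels, temperature=0.1):
    """Pull together samples that share a label (domain or activity),
    push apart samples that do not (SupCon-style loss)."""
    sim = emb @ emb.t() / temperature                      # (B, B) similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                             # exclude self-pairs
    logits = sim - torch.eye(len(emb), device=emb.device) * 1e9
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = mask_pos.sum(1).clamp(min=1)
    return -(mask_pos * log_prob).sum(1).div(denom).mean()

class ConditionedClassifier(nn.Module):
    """Activity head whose features attend over the domain embedding --
    a stand-in for the paper's conditioning-by-attention idea."""
    def __init__(self, emb_dim=64, num_activities=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(emb_dim, num_activities)

    def forward(self, act_emb, dom_emb):
        q = act_emb.unsqueeze(1)               # (B, 1, D) query: activity features
        kv = dom_emb.unsqueeze(1)              # (B, 1, D) key/value: domain embedding
        fused, _ = self.attn(q, kv, kv)
        return self.head(fused.squeeze(1))

# Stage 1 (assumed): train domain_encoder contrastively on domain meta-data
# labels (body position / subject / device); no activity labels are required.
# Stage 2 (assumed): train activity_encoder + ConditionedClassifier with a
# contrastive term on activity labels plus cross-entropy on the fused logits.
domain_encoder, activity_encoder = Encoder(), Encoder()
classifier = ConditionedClassifier()

x = torch.randn(8, 6, 128)               # 8 windows, 6 sensor channels, 128 steps
domain_y = torch.randint(0, 3, (8,))     # e.g. 3 body positions
activity_y = torch.randint(0, 6, (8,))   # e.g. 6 activities

dom_emb = domain_encoder(x)
loss_domain = supervised_contrastive_loss(dom_emb, domain_y)

act_emb = activity_encoder(x)
logits = classifier(act_emb, dom_emb.detach())
loss_activity = supervised_contrastive_loss(act_emb, activity_y) \
                + F.cross_entropy(logits, activity_y)
```

The separation into two losses mirrors the abstract's point that meta-data and activity labels may not be available for the same samples: the domain stage can be trained on samples that carry only meta-data, while the activity stage uses whatever activity-labeled samples exist.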
dc.description.uri | https://ieeexplore.ieee.org/document/9842497 | en_US |
dc.format.extent | 10 pages | en_US |
dc.genre | conference papers and proceedings | en_US |
dc.genre | postprints | en_US |
dc.identifier | doi:10.13016/m2hjbg-l2sy | |
dc.identifier.citation | M. A. Al Hafiz Khan and N. Roy, "Cross-Domain Unseen Activity Recognition Using Transfer Learning," 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), 2022, pp. 684-693, doi: 10.1109/COMPSAC54236.2022.00117. | en_US |
dc.identifier.uri | https://doi.org/10.1109/COMPSAC54236.2022.00117 | |
dc.identifier.uri | http://hdl.handle.net/11603/25733 | |
dc.language.iso | en_US | en_US |
dc.publisher | IEEE | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Center for Real-time Distributed Sensing and Autonomy | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.title | Cross-Domain Unseen Activity Recognition Using Transfer Learning | en_US |
dc.type | Text | en_US |
dcterms.creator | https://orcid.org/0000-0002-8324-1197 | |
dcterms.creator | https://orcid.org/0000-0002-8495-0948 |