Show simple item record

dc.contributor.author: Khan, Md Abdullah Al Hafiz
dc.contributor.author: Roy, Nirmalya
dcterms.creator: https://orcid.org/0000-0002-6180-1501
dc.date.accessioned: 2022-09-19T17:05:18Z
dc.date.available: 2022-09-19T17:05:18Z
dc.date.issued: 2022-07-01
dc.description.abstract: Activity Recognition (AR) models perform well when a large number of training instances is available. However, sensor heterogeneity, sensing bias, the variability of human behaviors and activities, and unseen activity classes pose key challenges to adopting and scaling pre-trained activity recognition models in a new environment. We address these unseen-activity recognition problems by applying transfer learning techniques that leverage a limited number of annotated samples and exploit the inherent structural patterns among activities within and across the source and target domains. This work proposes a novel AR framework that uses a pretrained deep autoencoder model to generate features from source and target activity samples. Furthermore, the framework establishes correlations among activities between the source and target domains by exploiting intra- and inter-class knowledge transfer, reducing the number of labeled samples required and recognizing unseen activities in the target domain. We validated the efficacy and effectiveness of our AR framework on three real-world data traces (Daily and Sports, Opportunistic, and Wisdm) that contain 41 users and 26 activities in total. Our AR framework achieves performance gains of ≈ 5-6% with 111, 18, and 70 activity samples (20% annotated samples) for the Das, Opp, and Wisdm datasets. In addition, our proposed AR framework requires 56, 8, and 35 fewer activity samples (10% fewer annotated examples) for Das, Opp, and Wisdm, respectively, compared to the state-of-the-art Untran model.
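The abstract outlines a pipeline in which a pretrained encoder produces features for source and target activity samples, and a handful of labeled target samples anchors knowledge transfer across domains. As a loose illustration only (not the authors' implementation, which uses a deep autoencoder and intra-/inter-class transfer), the sketch below uses a frozen linear-plus-tanh layer as a stand-in "pretrained" encoder and nearest-centroid matching in feature space; all names, shapes, and the simulated domain shift are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Stand-in "pretrained" encoder: one frozen linear layer + tanh.
    return np.tanh(x @ w)

d_in, d_feat = 12, 6
w = rng.normal(size=(d_in, d_feat))  # frozen encoder weights

# Source domain: 100 labeled samples for each of 3 activity classes,
# generated as class-specific mean offsets plus unit Gaussian noise.
offsets = rng.normal(size=(3, d_in)) * 2.0
xs = rng.normal(size=(300, d_in)) + np.repeat(offsets, 100, axis=0)
ys = np.repeat(np.arange(3), 100)

# Target domain: only 5 labeled samples per class (few-shot),
# related to the source by a simulated fixed domain shift.
shift = rng.normal(size=d_in) * 0.3
xt = xs[::20] + shift
yt = ys[::20]

ft = encode(xt, w)

# Class centroids from the few labeled target samples; new target
# samples are assigned to the nearest centroid in feature space.
centroids = np.stack([ft[yt == c].mean(axis=0) for c in range(3)])

def predict(x):
    f = encode(x, w)
    return np.argmin(((f[:, None] - centroids) ** 2).sum(axis=-1), axis=1)

# Classify three shifted samples drawn from source class 0.
query = xs[5:8] + shift
print(predict(query))
```

The design choice illustrated is that the encoder is never retrained: only the cheap centroid step consumes target labels, which is why only a small annotated set is needed in the target domain.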
dc.description.uri: https://ieeexplore.ieee.org/document/9842497
dc.format.extent: 10 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2hjbg-l2sy
dc.identifier.citation: M. A. Al Hafiz Khan and N. Roy, "Cross-Domain Unseen Activity Recognition Using Transfer Learning," 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), 2022, pp. 684-693, doi: 10.1109/COMPSAC54236.2022.00117.
dc.identifier.uri: https://doi.org/10.1109/COMPSAC54236.2022.00117
dc.identifier.uri: http://hdl.handle.net/11603/25733
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Cross-Domain Unseen Activity Recognition Using Transfer Learning
dc.type: Text

