Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables

dc.contributor.author: Zhang, Liming
dc.contributor.author: Zhang, Wenbin
dc.contributor.author: Japkowicz, Nathalie
dc.date.accessioned: 2020-11-17T19:25:26Z
dc.date.available: 2020-11-17T19:25:26Z
dc.description: 25th International Conference on Pattern Recognition
dc.description.abstract: Recognizing human activities from multi-channel time series data collected from wearable sensors has become an important practical application of machine learning. A serious challenge comes from the presence of coherent activities or body movements, such as movements of the head while walking or sitting, since the signals representing these movements are mixed and interfere with each other. Basic multi-label classification typically assumes independence among the multiple activities, an oversimplification that reduces modeling power even with state-of-the-art deep learning methods. In this paper, we investigate this new problem, which we name “Coherent Human Activity Recognition (Co-HAR)” and which preserves the complete conditional dependency among the multiple labels. Additionally, we treat Co-HAR as a dense labeling problem that classifies each sample at each time step with multiple coherent labels, providing high-fidelity, duration-sensitive support for high-precision applications. To explicitly model conditional dependency, a novel condition-aware deep architecture, “Conditional-UNet”, is developed to allow multiple dense labeling for Co-HAR. We also contribute a first-of-its-kind Co-HAR dataset for head gesture recognition associated with a user’s activity, walking or sitting, to the research community. Extensive experiments on this dataset show that our model outperforms state-of-the-art deep learning methods and achieves up to 92% accuracy on context-based head gesture classification.
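The abstract frames Co-HAR as dense, per-time-step multi-label classification in which one label (the head gesture) depends on another (the activity condition, walking or sitting). The sketch below is only a rough illustration of that idea, not the authors' Conditional-UNet: a minimal 1D encoder-decoder in PyTorch with a skip connection and two per-time-step output heads, where the gesture head is conditioned on the predicted activity probabilities. All class names, layer sizes, and class counts (ConditionalDenseLabeler, n_conditions, n_gestures, the 6-channel/128-sample input) are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code):
# a UNet-style 1D encoder-decoder that densely labels every time step and
# feeds the predicted activity "condition" back into the gesture head,
# mimicking the conditional dependency described in the abstract.
import torch
import torch.nn as nn

class ConditionalDenseLabeler(nn.Module):
    def __init__(self, in_channels=6, hidden=32, n_conditions=2, n_gestures=5):
        super().__init__()
        # Encoder/decoder over the time axis with one skip connection
        self.enc1 = nn.Sequential(nn.Conv1d(in_channels, hidden, 5, padding=2), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.enc2 = nn.Sequential(nn.Conv1d(hidden, hidden * 2, 5, padding=2), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Sequential(nn.Conv1d(hidden * 3, hidden, 5, padding=2), nn.ReLU())
        # Head 1: condition label per time step (e.g., walking vs. sitting)
        self.cond_head = nn.Conv1d(hidden, n_conditions, 1)
        # Head 2: gesture label per time step, conditioned on head 1's output
        self.gesture_head = nn.Conv1d(hidden + n_conditions, n_gestures, 1)

    def forward(self, x):                                   # x: (batch, channels, time)
        e1 = self.enc1(x)                                   # (B, H, T)
        e2 = self.enc2(self.pool(e1))                       # (B, 2H, T/2)
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection -> (B, H, T)
        cond_logits = self.cond_head(d)                     # (B, n_conditions, T)
        cond_probs = cond_logits.softmax(dim=1)
        # Approximate p(gesture | condition, x) by feeding condition probabilities back in
        gesture_logits = self.gesture_head(torch.cat([d, cond_probs], dim=1))
        return cond_logits, gesture_logits

# Usage: a 6-channel IMU window of 128 samples yields dense labels for every time step
model = ConditionalDenseLabeler()
cond, gesture = model(torch.randn(2, 6, 128))
print(cond.shape, gesture.shape)   # torch.Size([2, 2, 128]) torch.Size([2, 5, 128])
```

In this sketch the conditioning is expressed simply by concatenating the condition probabilities into the gesture head's input; the paper's actual architecture and training details should be taken from the PDF below.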
dc.format.extent: 8 pages
dc.genre: conference papers and proceedings postprints
dc.identifier: doi:10.13016/m2vboa-m2ab
dc.identifier.citation: Zhang, Liming; Zhang, Wenbin; Japkowicz, Nathalie; Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables; 25th International Conference on Pattern Recognition
dc.identifier.uri: http://hdl.handle.net/11603/20074
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables
dc.type: Text

Files

Original bundle

Name: ICPR20.pdf
Size: 4.67 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission