Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables
Citation of Original Publication
Zhang, Liming; Zhang, Wenbin; Japkowicz, Nathalie. Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables. 25th International Conference on Pattern Recognition.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
Recognizing human activities from multi-channel time series data collected from wearable sensors has become an important practical application of machine learning. A serious challenge comes from the presence of coherent activities or body movements, such as movements of the head while walking or sitting, since the signals representing these movements are mixed and interfere with each other. Basic multi-label classification typically assumes independence among the multiple activities. This is oversimplified and reduces modeling power even when using state-of-the-art deep learning methods. In this paper, we investigate this new problem, which we name "Coherent Human Activity Recognition (Co-HAR)", in which complete conditional dependency between the multiple labels is preserved. Additionally, we treat Co-HAR as a dense labeling problem that classifies each sample at each time step with multiple coherent labels, providing high-fidelity, duration-sensitive support for high-precision applications. To explicitly model this conditional dependency, we develop a novel condition-aware deep architecture, "Conditional-UNet", that produces multiple dense labelings for Co-HAR. We also contribute to the research community a first-of-its-kind Co-HAR dataset for head gesture recognition associated with a user's activity, walking or sitting. Extensive experiments on this dataset show that our model outperforms state-of-the-art deep learning methods and achieves up to 92% accuracy on context-based head gesture classification.
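The abstract's central distinction, independent multi-label prediction versus prediction with explicit conditional dependency, can be illustrated with a minimal sketch. The sketch below is not the paper's model: it assumes synthetic per-time-step probabilities (2 activity classes, 3 gesture classes, names invented here) and only shows how the chain rule p(a, g | x_t) = p(a | x_t) · p(g | a, x_t) couples the two labels at every time step, in contrast to assuming p(a, g | x_t) = p(a | x_t) · p(g | x_t).

```python
import numpy as np

# Hedged illustration (not the Conditional-UNet code): dense labeling over
# T time steps with two coherent labels per step:
#   activity a in {walk, sit}   and   head gesture g in {nod, shake, still}.
# Condition-aware factorization: p(a, g | x_t) = p(a | x_t) * p(g | a, x_t)

rng = np.random.default_rng(0)
T, A, G = 5, 2, 3  # time steps, activity classes, gesture classes

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-ins for network outputs at each time step.
p_act = softmax(rng.normal(size=(T, A)))                # p(a | x_t)
p_gest_given_act = softmax(rng.normal(size=(T, A, G)))  # p(g | a, x_t)

# Joint distribution per time step via the chain rule; because the gesture
# distribution depends on the activity, the two labels are NOT independent.
joint = p_act[:, :, None] * p_gest_given_act            # shape (T, A, G)

# Dense prediction: one (activity, gesture) pair for every time step.
flat = joint.reshape(T, A * G)
pred = np.stack(np.unravel_index(flat.argmax(axis=1), (A, G)), axis=1)
print(pred.shape)  # one coherent label pair per sample: (T, 2)
```

Under the independence assumption, `p_gest_given_act` would collapse to a single `(T, G)` table shared across activities, discarding exactly the activity-to-gesture dependency that, per the abstract, the condition-aware architecture is designed to keep.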
