P-Mix: A Data Augmentation Method for Contrastive Learning based Human Activity Recognition
| dc.contributor.author | Chen, Yingjie | |
| dc.contributor.author | Xie, Qi | |
| dc.contributor.author | Cui, Wenxuan | |
| dc.contributor.author | Chen, Liming | |
| dc.contributor.author | Song, Houbing | |
| dc.contributor.author | Zhu, Tao | |
| dc.date.accessioned | 2025-10-03T19:33:56Z | |
| dc.date.issued | 2025-08-25 | |
| dc.description.abstract | Supervised human activity recognition (HAR) with sensor data typically demands substantial labeled datasets to train robust models. Contrastive learning offers a self-supervised alternative by leveraging data augmentation to improve representation learning. However, most existing augmentation methods operate independently on either the time or channel dimension and often introduce unstructured noise, which can distort meaningful temporal and spectral patterns. To address these limitations, we present a novel P-Mix data augmentation method for contrastive learning in HAR tasks, specifically designed to be compatible with the SimCLR framework. P-Mix is a customized data augmentation method tailored to sensor data for human activity recognition, which slices and recombines both the time and channel dimensions, merging multiple temporal segments to encourage the model to explore the underlying relationships and variations in the data in an unsupervised setting. To capture motion cycles and long-term dependencies, we employ shorter temporal segments as fundamental processing units along the time dimension. By incorporating structured noise patterns based on motion cycle characteristics within these segments, we effectively enhance the model’s robustness and generalization capabilities. Extensive evaluations across five HAR benchmarks demonstrate that P-Mix achieves consistent improvements over the strongest baseline (Resample), delivering relative F1-score gains ranging from 1.87% (USC-HAD: 85.63% vs 83.93%) to 6.53% (DSADS: 97.24% vs 91.28%) through controlled multidimensional fusion. These results demonstrate the effectiveness of our approach in optimizing data generation and augmentation strategies for HAR tasks. | |
| dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China (62006110), the Natural Science Foundation of Hunan Province (2024JJ7428, 2023JJ30518), and the Scientific Research Project of the Hunan Provincial Department of Education (22C0229). (Corresponding author: Tao Zhu.) | |
| dc.description.uri | https://ieeexplore.ieee.org/document/11137374 | |
| dc.format.extent | 13 pages | |
| dc.genre | journal articles | |
| dc.genre | postprints | |
| dc.identifier | doi:10.13016/m2elqm-f5we | |
| dc.identifier.citation | Chen, Yingjie, Qi Xie, Wenxuan Cui, Liming Chen, Houbing Herbert Song, and Tao Zhu. “P-Mix: A Data Augmentation Method for Contrastive Learning Based Human Activity Recognition.” IEEE Transactions on Artificial Intelligence, August 25, 2025, 1–13. https://doi.org/10.1109/TAI.2025.3601599. | |
| dc.identifier.uri | http://doi.org/10.1109/TAI.2025.3601599 | |
| dc.identifier.uri | http://hdl.handle.net/11603/40371 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Information Systems Department | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
| dc.subject | Activity Recognition | |
| dc.subject | Computer science | |
| dc.subject | Self-supervised Learning | |
| dc.subject | Contrastive Learning | |
| dc.subject | Data Augmentation | |
| dc.subject | Artificial intelligence | |
| dc.subject | Motion segmentation | |
| dc.subject | Noise | |
| dc.subject | Data augmentation | |
| dc.subject | Data mining | |
| dc.subject | Feature extraction | |
| dc.subject | Contrastive learning | |
| dc.subject | Sensor Data | |
| dc.subject | Human activity recognition | |
| dc.subject | UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab) | |
| dc.subject | Data models | |
| dc.title | P-Mix: A Data Augmentation Method for Contrastive Learning based Human Activity Recognition | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-2631-9223 |
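The abstract describes P-Mix as slicing sensor windows into short temporal segments and recombining them across both the time and channel dimensions to produce augmented views for contrastive learning. The paper's exact algorithm is not reproduced in this record; the following is only a rough illustrative sketch of that slicing-and-recombining idea, with all function names, parameters, and the per-segment/per-channel mixing rule assumed rather than taken from the paper.

```python
import numpy as np

def p_mix_sketch(x1, x2, seg_len=32, rng=None):
    """Hypothetical sketch of a P-Mix-style augmentation (not the paper's
    exact method). x1 and x2 are two sensor windows of shape
    (time, channels). The window is cut into short temporal segments;
    within each segment, each channel is randomly sourced from either
    x1 or x2, yielding a recombined view for contrastive learning.
    """
    rng = np.random.default_rng(rng)
    T, C = x1.shape
    out = x1.copy()
    for start in range(0, T, seg_len):
        end = min(start + seg_len, T)
        # Assumed mixing rule: per segment, pick each channel's slice
        # from x2 with probability 0.5, otherwise keep x1's slice.
        take_x2 = rng.random(C) < 0.5
        out[start:end, take_x2] = x2[start:end, take_x2]
    return out
```

In a SimCLR-style pipeline, two such recombined views of related windows would be treated as a positive pair; the short `seg_len` mirrors the abstract's use of shorter temporal segments as the fundamental processing unit.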