SwinLSTM-EmoRec: A Robust Dual-Modal Emotion Recognition Framework Combining mmWave Radar and Camera for IoT-Enabled Multimedia Applications
| dc.contributor.author | Imran, Naveed | |
| dc.contributor.author | Zhang, Jian | |
| dc.contributor.author | Ali, Jehad | |
| dc.contributor.author | Hameed, Sana | |
| dc.contributor.author | Song, Houbing | |
| dc.contributor.author | Roh, Byeong-hee | |
| dc.date.accessioned | 2026-02-12T16:44:27Z | |
| dc.date.issued | 2025-12-18 | |
| dc.description.abstract | Camera-based facial-emotion recognition (FER) suffers from poor lighting, occlusion, and privacy exposure, whereas mmWave-only solutions lack the spatial detail required for fine-grained affect analysis. To close this gap, we present SwinLSTM-EmoRec (Shifted Window Transformer + Long Short-Term Memory Emotion Recognition). This non-contact dual-modal framework fuses micro-Doppler signatures captured by a TI IWR1443 mmWave radar with RGB imagery while treating radar as the primary, identity-obscured source and adaptively limiting reliance on RGB. Privacy is preserved because the cross-attention gate down-weights or bypasses RGB when illumination is poor or when potential identity exposure is detected, leaving decisions dominated by illumination-invariant radar dynamics. A shifted-window Swin Transformer extracts spatial facial cues, an LSTM models temporal radar dynamics, and a lightweight cross-attention layer aligns the two streams, boosting F₁ by up to 4% over early, late, and self-attention baselines. On a 50-participant interactive-gaming dataset recorded under varied lighting and distances of 0.5–2 m, the system achieves 98.5% accuracy (F₁ ≈ 0.98). It maintains 33.9 ms end-to-end latency on a 15 W Jetson Xavier NX edge device. Performance remains > 92% at 2 m, demonstrating robust, privacy-preserving emotion sensing suitable for smart-home, tele-health, and e-sports IoT applications. | |
| dc.description.sponsorship | This work was supported in part by the Brain Korea 21 (BK21) FOUR Program of the National Research Foundation of Korea, funded by the Ministry of Education under Grant NRF5199991514504. | |
| dc.description.uri | https://ieeexplore.ieee.org/abstract/document/11303666/authors | |
| dc.format.extent | 19 pages | |
| dc.genre | journal articles | |
| dc.genre | postprints | |
| dc.identifier | doi:10.13016/m261qf-xi9u | |
| dc.identifier.citation | Imran, Naveed, Jian Zhang, Jehad Ali, Sana Hameed, Houbing Herbert Song, and Byeong-hee Roh. "SwinLSTM-EmoRec: A Robust Dual-Modal Emotion Recognition Framework Combining mmWave Radar and Camera for IoT-Enabled Multimedia Applications". IEEE Internet of Things Journal, 2025, 1–1. https://doi.org/10.1109/JIOT.2025.3645907. | |
| dc.identifier.uri | https://doi.org/10.1109/JIOT.2025.3645907 | |
| dc.identifier.uri | http://hdl.handle.net/11603/41907 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.relation.ispartof | UMBC Information Systems Department | |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works | |
| dc.subject | Multi-modal Sensor Fusion | |
| dc.subject | Radar imaging | |
| dc.subject | Millimeter wave communication | |
| dc.subject | Millimeter Wave Radar Sensor | |
| dc.subject | Long short term memory | |
| dc.subject | IoT-based Emotion Recognition | |
| dc.subject | UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab) | |
| dc.subject | Privacy | |
| dc.subject | Cameras | |
| dc.subject | Non-Contact Human Monitoring | |
| dc.subject | Sensors | |
| dc.subject | Lighting | |
| dc.subject | Radar | |
| dc.subject | Internet of Things | |
| dc.subject | Real-Time Affective Computing | |
| dc.subject | Emotion recognition | |
| dc.title | SwinLSTM-EmoRec: A Robust Dual-Modal Emotion Recognition Framework Combining mmWave Radar and Camera for IoT-Enabled Multimedia Applications | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-2631-9223 |
Files
Original bundle
- Name: SwinLSTMEmoRecARobustDualModalEmotionRecognitionFramework.pdf
- Size: 10.92 MB
- Format: Adobe Portable Document Format
