SwinLSTM-EmoRec: A Robust Dual-Modal Emotion Recognition Framework Combining mmWave Radar and Camera for IoT-Enabled Multimedia Applications


Citation of Original Publication

Imran, Naveed, Jian Zhang, Jehad Ali, Sana Hameed, Houbing Herbert Song, and Byeong-hee Roh. "SwinLSTM-EmoRec: A Robust Dual-Modal Emotion Recognition Framework Combining mmWave Radar and Camera for IoT-Enabled Multimedia Applications". IEEE Internet of Things Journal, 2025, 1–1. https://doi.org/10.1109/JIOT.2025.3645907.

Rights

© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract

Camera-based facial-emotion recognition (FER) suffers from poor lighting, occlusion, and privacy exposure, whereas mmWave-only solutions lack the spatial detail required for fine-grained affect analysis. To close this gap, we present SwinLSTM-EmoRec (Shifted Window Transformer + Long Short-Term Memory Emotion Recognition), a non-contact dual-modal framework that fuses micro-Doppler signatures captured by a TI IWR1443 mmWave radar with RGB imagery, treating radar as the primary, identity-obscured source and adaptively limiting reliance on RGB. Privacy is preserved because the cross-attention gate down-weights or bypasses RGB when illumination is poor or when potential identity exposure is detected, leaving decisions dominated by illumination-invariant radar dynamics. A shifted-window Swin Transformer extracts spatial facial cues, an LSTM models temporal radar dynamics, and a lightweight cross-attention layer aligns the two streams, boosting F₁ by up to 4% over early-fusion, late-fusion, and self-attention baselines. On a 50-participant interactive-gaming dataset recorded under varied lighting and at distances of 0.5–2 m, the system achieves 98.5% accuracy (F₁ ≈ 0.98) while maintaining 33.9 ms end-to-end latency on a 15 W Jetson Xavier NX edge device. Accuracy remains above 92% at 2 m, demonstrating robust, privacy-aware emotion sensing suitable for smart-home, tele-health, and e-sports IoT applications.
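To illustrate the fusion idea described in the abstract (radar-dominant cross-attention with a gate that can down-weight RGB), the sketch below shows one plausible way such a module could be wired in PyTorch. It is a minimal, hypothetical illustration: the class name, feature dimensions, number of emotion classes, and gating formulation are assumptions for clarity, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GatedCrossAttentionFusion(nn.Module):
    """Illustrative radar-dominant fusion of an LSTM radar stream and Swin RGB tokens.

    A learned scalar gate in [0, 1] scales the RGB contribution so it can be
    suppressed (e.g., under poor lighting). Dimensions are illustrative only.
    """

    def __init__(self, radar_dim=128, rgb_dim=768, fused_dim=128, num_heads=4, num_classes=7):
        super().__init__()
        # Temporal model over per-frame micro-Doppler features (radar stream).
        self.radar_lstm = nn.LSTM(input_size=radar_dim, hidden_size=fused_dim, batch_first=True)
        # Project Swin Transformer patch tokens to the shared width.
        self.rgb_proj = nn.Linear(rgb_dim, fused_dim)
        # Radar query attends over RGB keys/values (cross-attention).
        self.cross_attn = nn.MultiheadAttention(embed_dim=fused_dim, num_heads=num_heads, batch_first=True)
        # Scalar gate controlling how much of the attended RGB feature is used.
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, 1), nn.Sigmoid())
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, radar_seq, rgb_tokens):
        # radar_seq: (B, T, radar_dim) micro-Doppler features per radar frame
        # rgb_tokens: (B, N, rgb_dim) Swin Transformer tokens for one RGB frame
        radar_out, _ = self.radar_lstm(radar_seq)                 # (B, T, fused_dim)
        radar_feat = radar_out[:, -1]                             # last time step, (B, fused_dim)
        rgb_feat = self.rgb_proj(rgb_tokens)                      # (B, N, fused_dim)
        attended, _ = self.cross_attn(radar_feat.unsqueeze(1),    # query from radar
                                      rgb_feat, rgb_feat)         # keys/values from RGB
        attended = attended.squeeze(1)                            # (B, fused_dim)
        g = self.gate(torch.cat([radar_feat, attended], dim=-1))  # (B, 1)
        fused = radar_feat + g * attended                         # radar-dominant fusion
        return self.classifier(fused)


# Example usage with random tensors of illustrative shapes
model = GatedCrossAttentionFusion()
radar = torch.randn(2, 16, 128)   # 2 samples, 16 radar frames
rgb = torch.randn(2, 49, 768)     # 2 samples, 49 Swin tokens
logits = model(radar, rgb)        # (2, 7) emotion logits
```

Because the gate multiplies only the attended RGB feature, driving it toward zero leaves the prediction dominated by the radar pathway, which matches the privacy-preserving behavior the abstract attributes to the cross-attention gate.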