MediNet: Self-Supervised Framework for Multimodal Analysis and Patient Care in IoMT
Links to Files
Author/Creator
Awan, Kamran Ahmad; Khan, Sonia; Cengiz, Korhan; Song, Houbing; Alrashdi, Ibrahim
Author/Creator ORCID
Date
September 25, 2025
Type of Work
Department
Program
Citation of Original Publication
Awan, Kamran Ahmad, Sonia Khan, Korhan Cengiz, Houbing Song, and Ibrahim Alrashdi. “MediNet: Self-Supervised Framework for Multimodal Analysis and Patient Care in IoMT.” IEEE Internet of Things Journal, September 25, 2025, 1–1. https://doi.org/10.1109/JIOT.2025.3614207.
Rights
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Subjects
Feature extraction
Contrastive learning
Privacy Preserving
Imaging
UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab)
Scalability
Self-Supervised
Real-time systems
Patient monitoring
Internet of Medical Things
Edge computing
Mathematical models
Training
Multimodal Analysis
Heuristic algorithms
Abstract
Medical data analysis presents major challenges due to multimodal characteristics, real-time processing demands, and scalability constraints. Existing methods face limitations in managing data heterogeneity and generating timely outputs, which restrict their applicability in clinical environments. This study introduces MediNet, a self-supervised framework that addresses these challenges by integrating multimodal analysis, temporal monitoring, and decentralized processing within the Internet of Medical Things (IoMT). MediNet employs an adaptive self-supervised learning strategy, cross-modal contrastive learning, and edge computing to enable efficient real-time analytics. Temporal data streams are processed using dynamic monitoring modules for anomaly detection, while multimodal integration supports personalized patient care. The framework was implemented in PyTorch and evaluated on the BraTS 2021, CheXpert, ISIC 2018, and MIMIC-IV datasets using an NVIDIA A100 GPU. Experimental results show that MediNet consistently outperforms baseline models, achieving a generalization performance of 94.8% on BraTS 2021 and an inference latency of 74 ms on ISIC 2018.
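Note: this record does not include source code. As an illustration of the cross-modal contrastive learning the abstract describes, the sketch below shows a minimal InfoNCE-style objective in PyTorch that aligns paired embeddings from two modalities (e.g., imaging and clinical records). The function name cross_modal_infonce, the temperature value, and the embedding dimensions are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def cross_modal_infonce(z_img, z_sig, temperature=0.07):
    """Illustrative InfoNCE-style loss between paired embeddings from
    two modalities; row i of z_img and z_sig forms a positive pair.
    Hypothetical sketch, not MediNet's actual loss."""
    # L2-normalize so dot products are cosine similarities.
    z_img = F.normalize(z_img, dim=1)
    z_sig = F.normalize(z_sig, dim=1)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = z_img @ z_sig.t() / temperature

    # Diagonal entries are the positives; all other pairs are negatives.
    targets = torch.arange(z_img.size(0), device=z_img.device)

    # Symmetric loss: image-to-signal and signal-to-image retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random 128-dimensional embeddings for a batch of 8.
loss = cross_modal_infonce(torch.randn(8, 128), torch.randn(8, 128))

The symmetric form penalizes retrieval errors in both directions, which is a common choice when neither modality is privileged; the batch-internal negatives make the objective self-supervised, consistent with the framework's stated training strategy.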
