Long-Tailed Federated Learning in Internet of Medical Things Based on Ensemble Distillation and Imbalanced Calibration
Author/Creator
Bin Jiang, Yuchen Shang, Guanghui Yue, Huihui Helen Wang, Houbing Herbert Song
Date
2025-01-31
Citation of Original Publication
Jiang, Bin, Yuchen Shang, Guanghui Yue, Huihui Helen Wang, and Houbing Herbert Song. "Long-Tailed Federated Learning in Internet of Medical Things Based on Ensemble Distillation and Imbalanced Calibration". IEEE Transactions on Consumer Electronics, 2025, 1–1. https://doi.org/10.1109/TCE.2025.3537062.
Rights
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Subjects
Privacy computing
Tail
Training
Distributed databases
Servers
Analytical models
Federated learning
Client scoring
UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab)
Computational modeling
Ensemble distillation
Internet of Medical Things
Data privacy
Long-tailed data
Heavily-tailed distribution
Data models
Abstract
The Internet of Medical Things (IoMT) has a promising future, as its devices can monitor vital signs, offer treatment guidance, and perform real-time diagnostics using AI and wireless communication technologies. However, because patient data are difficult to collect at scale and carry privacy risks, traditional centralized machine learning methods are often impractical for IoMT devices. Federated learning, as a privacy-preserving technology, aims to build high-quality deep learning models across distributed clients while protecting data privacy. However, current popular federated learning methods perform poorly on non-IID data, especially under long-tailed class distributions, which are common in IoMT. Moreover, privacy constraints on distributed clients prevent these methods from applying traditional deep learning techniques for handling long-tailed data. To address these challenges, this paper proposes Privacy-preserving Computing Client Scoring and Knowledge Distillation (FedLT+SKD). The method uses privacy-preserving computation to obtain prior knowledge of the global class distribution without compromising data privacy. Based on this prior, it employs a score-based sampling strategy to identify clients that perform well on tail classes and uploads their local models to the server. On the server side, the robustness of the global model is enhanced through ensemble distillation and imbalanced calibration. We verify the effectiveness of the method on the medical datasets ISIC, ChestX-ray14, and MRI, as well as on the standard benchmarks CIFAR-10-LT and CIFAR-100-LT; the experimental results show that it outperforms popular federated and long-tailed learning methods.
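To make the described pipeline concrete, the following is a minimal NumPy sketch of the three server-side steps outlined in the abstract: aggregating a global class prior, score-based client selection, and ensemble distillation with imbalanced calibration. It is an illustration under stated assumptions, not the authors' implementation: the tail-class threshold, the tail-accuracy scoring rule, and the log-prior logit adjustment used for calibration are all assumptions, and the client statistics are random placeholders.

```python
# Illustrative sketch only -- NOT the authors' released code. It mimics the
# FedLT+SKD pipeline described in the abstract: (1) aggregate a global class
# prior, (2) score and select clients by tail-class performance, (3) ensemble-
# distill the selected clients' outputs into calibrated global soft targets.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, NUM_CLIENTS, TOP_K = 10, 8, 3

# --- Step 1: global class prior from aggregated client label counts ---
# In the paper this aggregate is obtained via privacy-preserving computation,
# so the server never observes any individual client's counts.
client_counts = rng.integers(1, 200, size=(NUM_CLIENTS, NUM_CLASSES))
global_prior = client_counts.sum(axis=0) / client_counts.sum()

# Tail classes: those whose global prior falls below the mean (assumption).
tail_classes = np.flatnonzero(global_prior < global_prior.mean())

# --- Step 2: score clients by their accuracy on tail classes ---
# per_class_acc[i, c] stands in for client i's validation accuracy on class c.
per_class_acc = rng.uniform(0.3, 0.95, size=(NUM_CLIENTS, NUM_CLASSES))
scores = per_class_acc[:, tail_classes].mean(axis=1)
selected = np.argsort(scores)[-TOP_K:]   # clients whose models are uploaded

# --- Step 3: ensemble distillation with imbalanced calibration ---
# client_logits[i] stands in for client i's logits on a shared distillation
# batch; the average over selected clients acts as the teacher ensemble.
batch = 16
client_logits = rng.normal(size=(NUM_CLIENTS, batch, NUM_CLASSES))
teacher_logits = client_logits[selected].mean(axis=0)

# Calibrate by subtracting the log prior (logit adjustment), which raises
# tail-class probabilities relative to head classes.
calibrated = teacher_logits - np.log(global_prior + 1e-12)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Soft targets that would supervise the global (student) model via a KL loss.
soft_targets = softmax(calibrated)
print("selected clients:", selected, "tail classes:", tail_classes)
```

In this sketch the calibration step is a standard logit-adjustment trick from long-tailed learning; the paper's own calibration may differ in detail, but the structure (select by score, distill the ensemble, correct for the class imbalance) follows the abstract.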