FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions
| dc.contributor.author | Tang, Jianheng | |
| dc.contributor.author | Yang, Zhirui | |
| dc.contributor.author | Wang, Jingchao | |
| dc.contributor.author | Fan, Kejia | |
| dc.contributor.author | Xu, Jinfeng | |
| dc.contributor.author | Zhuang, Huiping | |
| dc.contributor.author | Liu, Anfeng | |
| dc.contributor.author | Song, Houbing | |
| dc.contributor.author | Wang, Leye | |
| dc.contributor.author | Liu, Yunhuai | |
| dc.date.accessioned | 2025-09-18T14:22:16Z | |
| dc.date.issued | 2025-08-06 | |
| dc.description.abstract | Recently, Personalized Federated Learning (PFL) has emerged as a prevalent paradigm for delivering personalized models that are trained collaboratively yet adapted to each client's local application. Existing PFL methods typically struggle with the ubiquitous data heterogeneity (i.e., non-IID data) across clients, which severely hinders convergence and degrades performance. We identify the root cause as the long-standing reliance on gradient-based updates, which are inherently sensitive to non-IID data. To address this issue at its root, we propose a Heterogeneity-invariant Personalized Federated learning scheme, named FedHiP, which replaces gradient-based updates with analytical (i.e., closed-form) solutions. Specifically, we build on self-supervised pre-training, using a foundation model as a frozen backbone for gradient-free feature extraction. On top of this feature extractor, we develop an analytic classifier that is likewise trained without gradients. To support both collective generalization and individual personalization, FedHiP comprises three phases: analytic local training, analytic global aggregation, and analytic local personalization. These closed-form solutions give FedHiP its ideal property of heterogeneity invariance: each personalized model remains identical regardless of how non-IID the data are distributed across all other clients. Extensive experiments on benchmark datasets validate the superiority of FedHiP, which outperforms state-of-the-art baselines by 5.79%-20.97% in accuracy. (An illustrative closed-form sketch follows the metadata table below.) | |
| dc.description.uri | http://arxiv.org/abs/2508.04470 | |
| dc.format.extent | 11 pages | |
| dc.genre | journal articles | |
| dc.genre | preprints | |
| dc.identifier | doi:10.13016/m2tw6b-ydpd | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2508.04470 | |
| dc.identifier.uri | http://hdl.handle.net/11603/40215 | |
| dc.language.iso | en | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Information Systems Department | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
| dc.subject | Computer Science - Machine Learning | |
| dc.subject | UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab) | |
| dc.title | FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-2631-9223 | |
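
The pipeline described in the abstract (frozen-backbone features, an analytic classifier, and three analytic phases) lends itself to a compact illustration. The following is a minimal sketch, assuming a ridge-regression-style closed form on frozen-backbone features; the function names (`local_stats`, `aggregate`, `personalize`), the regularizer `LAM`, and the blending weight `alpha` are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

LAM = 1e-2  # ridge regularizer; an assumed hyperparameter, not from the paper


def local_stats(X, Y):
    """Analytic local training: a client summarizes its frozen-backbone
    features X (n x d) and one-hot labels Y (n x c) as Gram statistics."""
    return X.T @ X, X.T @ Y


def aggregate(stats):
    """Analytic global aggregation: per-client statistics are simply summed,
    so the result is invariant to how data are partitioned across clients."""
    A = sum(a for a, _ in stats)
    B = sum(b for _, b in stats)
    return A, B


def solve(A, B):
    """Closed-form ridge solution W = (A + LAM*I)^-1 B; no gradient steps."""
    d = A.shape[0]
    return np.linalg.solve(A + LAM * np.eye(d), B)


def personalize(a_k, b_k, A_glob, B_glob, alpha=0.5):
    """Analytic local personalization (illustrative): blend global and local
    statistics in one closed form; alpha is a hypothetical mixing knob."""
    return solve(a_k + alpha * A_glob, b_k + alpha * B_glob)


# Toy run: 3 clients, 64-dim features, 10 classes.
d, c = 64, 10
clients = [(rng.normal(size=(100, d)), np.eye(c)[rng.integers(0, c, 100)])
           for _ in range(3)]
stats = [local_stats(X, Y) for X, Y in clients]
A_glob, B_glob = aggregate(stats)
W_global = solve(A_glob, B_glob)                     # shared analytic classifier
W_client0 = personalize(*stats[0], A_glob, B_glob)   # personalized weights
```

Because aggregation only sums per-client statistics, the global (and hence each personalized) solution depends on the pooled data alone, not on how it is split across clients; this is the heterogeneity-invariance property claimed in the abstract.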