FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions

dc.contributor.author: Tang, Jianheng
dc.contributor.author: Yang, Zhirui
dc.contributor.author: Wang, Jingchao
dc.contributor.author: Fan, Kejia
dc.contributor.author: Xu, Jinfeng
dc.contributor.author: Zhuang, Huiping
dc.contributor.author: Liu, Anfeng
dc.contributor.author: Song, Houbing
dc.contributor.author: Wang, Leye
dc.contributor.author: Liu, Yunhuai
dc.date.accessioned: 2025-09-18T14:22:16Z
dc.date.issued: 2025-08-06
dc.description.abstract: Recently, Personalized Federated Learning (PFL) has emerged as a prevalent paradigm that delivers personalized models through collaborative training while adapting to each client's local application. Existing PFL methods typically face a significant challenge due to the ubiquitous data heterogeneity (i.e., non-IID data) across clients, which severely hinders convergence and degrades performance. We identify that the root issue lies in the long-standing reliance on gradient-based updates, which are inherently sensitive to non-IID data. To fundamentally address this issue and bridge the research gap, in this paper we propose a Heterogeneity-invariant Personalized Federated learning scheme, named FedHiP, which uses analytic (i.e., closed-form) solutions to avoid gradient-based updates. Specifically, we exploit the trend of self-supervised pre-training, leveraging a foundation model as a frozen backbone for gradient-free feature extraction. On top of this feature extractor, we further develop an analytic classifier for gradient-free training. To support both collective generalization and individual personalization, our FedHiP scheme incorporates three phases: analytic local training, analytic global aggregation, and analytic local personalization. The closed-form solutions of our FedHiP scheme enable its ideal property of heterogeneity invariance, meaning that each personalized model remains identical regardless of how non-IID the data are distributed across all other clients. Extensive experiments on benchmark datasets validate the superiority of our FedHiP scheme, which outperforms state-of-the-art baselines by 5.79%-20.97% in accuracy. (An illustrative sketch of the closed-form phases follows this record.)
dc.description.uri: http://arxiv.org/abs/2508.04470
dc.format.extent: 11 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2tw6b-ydpd
dc.identifier.uri: https://doi.org/10.48550/arXiv.2508.04470
dc.identifier.uri: http://hdl.handle.net/11603/40215
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless covered by a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Computer Science - Machine Learning
dc.subject: UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab)
dc.title: FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-2631-9223
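
Illustrative sketch. The abstract outlines three closed-form phases over features from a frozen backbone but gives no formulas here. Below is a minimal sketch, assuming a ridge-regression-style analytic classifier W = (X^T X + lambda*I)^(-1) X^T Y, a common construction in analytic learning; the paper's exact formulation and personalization rule may differ. Because each client's Gram matrix G_k = X_k^T X_k and cross-correlation C_k = X_k^T Y_k sum to exactly the pooled-data statistics, the aggregated solution does not depend on how the data are partitioned across clients, which matches the heterogeneity-invariance property the abstract claims. All identifiers below (local_statistics, personalize, LAMBDA, alpha) are hypothetical.

# Hedged sketch of closed-form (ridge-regression-style) federated training.
# Reconstructed from the abstract alone; not the paper's actual code.
import numpy as np

LAMBDA = 1e-3  # ridge regularizer (assumed hyperparameter)

def local_statistics(feats, onehot):
    """Phase 1 (analytic local training): each client summarizes its data as
    a Gram matrix G = X^T X and cross-correlation C = X^T Y computed on
    features from the frozen pre-trained backbone (no gradients involved)."""
    return feats.T @ feats, feats.T @ onehot

def solve_classifier(G, C, lam=LAMBDA):
    """Closed-form ridge solution W = (G + lam*I)^{-1} C."""
    d = G.shape[0]
    return np.linalg.solve(G + lam * np.eye(d), C)

def global_aggregate(stats):
    """Phase 2 (analytic global aggregation): summing client statistics
    yields the same classifier as training on the pooled data, so the result
    is invariant to how the data are split across clients."""
    G = sum(g for g, _ in stats)
    C = sum(c for _, c in stats)
    return G, C

def personalize(G_global, C_global, G_k, C_k, alpha=1.0):
    """Phase 3 (analytic local personalization): up-weight the client's own
    statistics against the global ones; alpha is a hypothetical mixing knob."""
    return solve_classifier(G_global + alpha * G_k, C_global + alpha * C_k)

# Usage: two toy clients with a 16-dim frozen feature space and 3 classes.
rng = np.random.default_rng(0)
stats = []
for _ in range(2):
    X = rng.normal(size=(50, 16))               # frozen-backbone features
    Y = np.eye(3)[rng.integers(0, 3, size=50)]  # one-hot labels
    stats.append(local_statistics(X, Y))
G, C = global_aggregate(stats)
W_pers = personalize(G, C, *stats[0])           # personalized model, client 0
print(W_pers.shape)                             # (16, 3)

Note that only the aggregated statistics change when other clients' data are re-partitioned; they change not at all when the union of that data is fixed, which is why each personalized solution is unaffected by how non-IID the other clients' splits are.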

Files

Original bundle

Name: 2508.04470v1.pdf
Size: 7.49 MB
Format: Adobe Portable Document Format