FedMT: Multi-Task Federated Learning with Competitive GPU Resource Sharing
Citation of Original Publication
Yu, Yongbo, Fuxun Yu, Zirui Xu, et al. “FedMT: Multi-Task Federated Learning with Competitive GPU Resource Sharing.” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, August 19, 2025, 1–1. https://doi.org/10.1109/TCAD.2025.3600367.
Rights
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
Federated learning (FL) increasingly involves heterogeneous compound learning tasks as cognitive applications grow more complex. For example, a self-driving system hosts multiple tasks simultaneously (e.g., detection, classification, and segmentation) and expects FL to sustain life-long learning across them. However, our analysis shows that deploying compound FL models for multiple training tasks on a single GPU raises two issues: because the tasks’ skewed data distributions and differing model architectures produce highly imbalanced learning workloads, current GPU scheduling methods fail to allocate resources effectively; and existing FL schemes, which address heterogeneous data distributions but not runtime computing, cannot in practice achieve optimally synchronized federation. To address these issues, we propose a full-stack FL optimization scheme that tackles both intra-device GPU scheduling and inter-device FL coordination for multi-task training. Our work demonstrates two key insights: competitive resource sharing benefits parallel model execution, and the proposed notion of a “virtual resource” can effectively characterize and guide per-task resource utilization and allocation; in addition, architecture-level coordination improves FL performance by aligning task workloads with GPU utilization. Our experiments show that FL performance can be significantly improved: we observed a 2.16×–2.38× increase in intra-device GPU training throughput and a 2.53×–2.80× improvement in inter-device FL coordination efficiency across diverse multi-task scenarios.
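As a concrete illustration of the competitive GPU sharing idea described in the abstract, the sketch below trains two task models concurrently on one GPU by issuing each task’s training step on its own CUDA stream, so their kernels can interleave and compete for GPU resources. This is only a minimal PyTorch sketch with hypothetical task names, dummy data, and a crude batch-size proxy for imbalanced workloads; it is not the paper’s FedMT implementation or its virtual-resource scheduler.

# Minimal illustrative sketch (not the paper's FedMT implementation): two task
# models share one GPU by issuing their training steps on separate CUDA streams,
# so their kernels can interleave and compete for GPU resources.
# Assumes PyTorch with a CUDA device; task names and shapes are hypothetical.
import torch
import torch.nn as nn

device = torch.device("cuda")

# Stand-ins for two heterogeneous FL tasks with different model sizes.
task_models = {
    "heavy_task": nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device),
    "light_task": nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1)).to(device),
}
optimizers = {name: torch.optim.SGD(m.parameters(), lr=0.01) for name, m in task_models.items()}
streams = {name: torch.cuda.Stream() for name in task_models}  # one CUDA stream per task

def train_step(name, batch_size):
    """One training step for a task, issued entirely on that task's CUDA stream."""
    model, opt = task_models[name], optimizers[name]
    out_dim = model[-1].out_features
    with torch.cuda.stream(streams[name]):
        x = torch.randn(batch_size, 512, device=device)      # dummy input batch
        y = torch.randn(batch_size, out_dim, device=device)  # dummy target (illustration only)
        opt.zero_grad(set_to_none=True)
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

for step in range(100):
    # Larger batch for the heavier task: a crude proxy for skewed per-task workloads.
    train_step("heavy_task", batch_size=256)
    train_step("light_task", batch_size=64)

torch.cuda.synchronize()  # wait for both streams to finish before measuring throughput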
