FedProphet, a novel federated adversarial training framework, simultaneously achieves memory efficiency, adversarial robustness, and objective consistency by partitioning a large model into small cascaded modules, deriving a strong-convexity regularization that guarantees robustness, and coordinating clients' local training according to their hardware resources.
This work proposes a federated multi-task learning method that combines the strengths of federated learning and multi-task learning to collaboratively train models across diverse data distributions and task types, and evaluates it systematically.
This paper introduces a comprehensive benchmark, FMTL-Bench, to systematically evaluate the Federated Multi-Task Learning (FMTL) paradigm by considering data, model, and optimization algorithm levels. The benchmark covers various non-IID data partitioning scenarios and provides valuable insights into the strengths and limitations of existing baseline methods for optimal FMTL application.
This paper proposes FedADMM-InSa, an inexact and self-adaptive FedADMM algorithm that addresses two challenges of current FedADMM methods: the need to empirically set the local training accuracy and the penalty parameter.
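To illustrate what "self-adaptive penalty" means in an ADMM setting, here is the classical residual-balancing heuristic; it is shown only as a minimal sketch of the kind of hand-tuning-free rule FedADMM-InSa targets, and the paper's actual criterion differs.

```python
def adapt_penalty(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Residual-balancing update for the ADMM penalty parameter rho.

    Classical heuristic, not the rule from FedADMM-InSa: keep the
    primal and dual residuals within a factor mu of each other by
    scaling rho up or down by tau.
    """
    if primal_res > mu * dual_res:
        return rho * tau   # primal residual dominates: increase penalty
    if dual_res > mu * primal_res:
        return rho / tau   # dual residual dominates: decrease penalty
    return rho             # residuals balanced: leave rho unchanged
```

With `mu=10`, the penalty only moves when one residual exceeds the other by an order of magnitude, which keeps `rho` stable between occasional corrections.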
Federated Dual Prompt Tuning (Fed-DPT) is a novel federated learning approach that leverages prompt tuning techniques for both visual and textual inputs to address the challenges of domain shift and communication efficiency in federated learning.
Clients with more diverse data can improve the performance of federated learning models. By up-weighting updates from high-diversity clients and down-weighting those from low-diversity clients, the proposed WeiAvgCS framework can significantly improve the convergence speed of federated learning.
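The weighted aggregation described above can be sketched as follows; the diversity scores and the exponent `alpha` are illustrative assumptions, not WeiAvgCS's exact weighting scheme.

```python
import numpy as np

def weighted_average(updates, diversity_scores, alpha=1.0):
    """Aggregate client updates, up-weighting high-diversity clients.

    updates: list of 1-D parameter-update vectors, one per client.
    diversity_scores: non-negative per-client data-diversity estimates
                      (hypothetical; WeiAvgCS defines its own measure).
    alpha: exponent controlling how strongly diversity is emphasized;
           alpha=0 recovers plain uniform averaging.
    """
    scores = np.asarray(diversity_scores, dtype=float) ** alpha
    weights = scores / scores.sum()   # normalize to a convex combination
    stacked = np.stack(updates)       # shape: (num_clients, num_params)
    return weights @ stacked          # diversity-weighted average
```

For example, with scores `[1, 3]` and `alpha=1`, the second client's update receives weight 0.75 and the first 0.25.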
Centaur, an end-to-end federated learning framework, enhances efficiency in multidevice federated learning by incorporating on-device data selection and partition-based model training to address the resource constraints of ubiquitous low-end devices.
AdaptiveFL, a novel federated learning approach, can generate and adaptively dispatch heterogeneous models to resource-constrained AIoT devices, achieving better inference performance than state-of-the-art methods.
This paper proposes MimiC, a novel federated learning algorithm that effectively combats the negative impact of arbitrary client dropouts by modifying the received model updates to mimic an imaginary central update.
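A minimal sketch of the correction idea, assuming the server shifts each received update by the gap between the previous round's average update and that client's own previous update (the paper's exact rule may differ):

```python
import numpy as np

def mimic_correct(update_i, prev_client_update_i, prev_avg_update):
    """Shift one client's update toward an imaginary central update.

    Sketch of the MimiC-style correction: adding the gap between the
    previous average update and this client's previous update reduces
    the bias introduced when only a subset of clients survives a round.
    All argument names are illustrative, not from the paper.
    """
    return update_i + (prev_avg_update - prev_client_update_i)
```

A client whose past updates already matched the average is left unchanged; a client that previously drifted away from the average is pulled back toward it.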
Heterogeneous data in federated learning leads to dimensional collapse in both global and local models, which can be effectively mitigated by the proposed FEDDECORR method.
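A decorrelation regularizer of the kind FedDecorr applies can be sketched as below: it penalizes off-diagonal entries of the dimension-wise correlation matrix of a mini-batch of representations, discouraging features from collapsing into a low-dimensional subspace (the paper's exact normalization may differ).

```python
import numpy as np

def feddecorr_loss(z, eps=1e-8):
    """Decorrelation regularizer against dimensional collapse.

    z: (batch, dim) matrix of feature representations.
    Returns the squared Frobenius norm of the off-diagonal part of the
    dimension-wise correlation matrix; zero when features are already
    uncorrelated. Sketch only, not the paper's exact formulation.
    """
    z = z - z.mean(axis=0, keepdims=True)       # center each dimension
    z = z / (z.std(axis=0, keepdims=True) + eps)  # unit variance per dim
    corr = (z.T @ z) / z.shape[0]               # (dim, dim) correlations
    off_diag = corr - np.diag(np.diag(corr))    # drop the diagonal
    return float((off_diag ** 2).sum())
```

Perfectly redundant dimensions (every column identical) give the maximal penalty, while mutually uncorrelated dimensions give zero, which is what counteracts collapse.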