Key Concepts
This paper introduces FMTL-Bench, a comprehensive benchmark for systematically evaluating the Federated Multi-Task Learning (FMTL) paradigm at the data, model, and optimization-algorithm levels. The benchmark spans both IID and non-IID data partitioning scenarios and yields insights into the strengths and limitations of existing baseline methods, guiding the effective application of FMTL in practice.
Summary
The paper introduces a comprehensive benchmark called FMTL-Bench to systematically evaluate the Federated Multi-Task Learning (FMTL) paradigm. FMTL combines the advantages of Federated Learning (FL) and Multi-Task Learning (MTL), enabling collaborative model training on multi-task learning datasets while ensuring data locality.
The key aspects of FMTL-Bench are:
Data Level:
- Seven sets of comparative experiments covering various independent and identically distributed (IID) and non-independent and identically distributed (Non-IID) data partitioning scenarios.
- The scenarios vary the number and type of MTL tasks, drawn from different domains, that each client trains on (a minimal partitioning sketch follows this list).
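To make the data level concrete, here is a minimal, hypothetical sketch of how tasks could be assigned to clients under IID versus non-IID splits. The function and variable names are illustrative assumptions, not FMTL-Bench's actual API.

```python
import random

def assign_tasks(clients, tasks, iid: bool, tasks_per_client: int = 1, seed: int = 0):
    """Toy task-to-client assignment (illustrative, not the benchmark's code).

    IID-style split: every client trains all tasks, so local task
    distributions match the global one. Non-IID-style split: each client
    receives only a subset of tasks, skewing local distributions.
    """
    rng = random.Random(seed)
    if iid:
        return {c: list(tasks) for c in clients}
    return {c: rng.sample(tasks, tasks_per_client) for c in clients}

# Example: 4 clients, 3 dense-prediction tasks.
clients = [f"client_{i}" for i in range(4)]
tasks = ["semantic_seg", "depth", "normals"]
print(assign_tasks(clients, tasks, iid=True))   # each client sees every task
print(assign_tasks(clients, tasks, iid=False))  # single-task clients (non-IID)
```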
Model Level:
- Examination of single-task learning models and MTL models built on either multi-decoder (MD) or single-decoder (TC) architectures (see the architecture sketch after this list).
- Experiments conducted using network backbones of different sizes.
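To illustrate the MD/TC distinction, below is a minimal PyTorch sketch of the two MTL architecture families: a shared backbone with one decoder per task (MD) versus a single decoder conditioned on a task embedding (TC-style). Layer sizes and module names are assumptions for illustration, not the paper's exact models.

```python
import torch
import torch.nn as nn

class MultiDecoderMTL(nn.Module):
    """MD-style: shared backbone, one decoder head per task."""
    def __init__(self, tasks, in_dim=128, feat_dim=64, out_dim=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.decoders = nn.ModuleDict({t: nn.Linear(feat_dim, out_dim) for t in tasks})

    def forward(self, x, task):
        return self.decoders[task](self.backbone(x))

class SingleDecoderMTL(nn.Module):
    """TC-style: shared backbone, single decoder conditioned on a task embedding."""
    def __init__(self, tasks, in_dim=128, feat_dim=64, out_dim=10):
        super().__init__()
        self.task_ids = {t: i for i, t in enumerate(tasks)}
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_emb = nn.Embedding(len(tasks), feat_dim)
        self.decoder = nn.Linear(feat_dim, out_dim)

    def forward(self, x, task):
        feat = self.backbone(x)
        cond = self.task_emb(torch.tensor(self.task_ids[task]))
        return self.decoder(feat + cond)  # condition via additive task embedding

x = torch.randn(2, 128)
md = MultiDecoderMTL(["seg", "depth"])
tc = SingleDecoderMTL(["seg", "depth"])
print(md(x, "seg").shape, tc(x, "depth").shape)  # torch.Size([2, 10]) twice
```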
Optimization Algorithm Level:
- Evaluation of nine baselines spanning local training, FL, MTL, and FMTL algorithms.
- The algorithms optimize over either model parameters or accumulated gradients, and some employ a parameter-decoupling strategy (see the aggregation sketch after this list).
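As a sketch of parameter-based optimization and the decoupling strategy, the following shows FedAvg-style weighted averaging in which, for FedRep-like decoupled methods, only parameters under an assumed shared prefix (e.g. "backbone.") are aggregated while decoder parameters stay local. This is a simplified illustration, not the benchmark's implementation.

```python
import copy

def fedavg_aggregate(client_states, client_sizes, shared_prefix=""):
    """FedAvg-style weighted parameter averaging (illustrative sketch).

    With shared_prefix="" this is plain FedAvg: every parameter is
    averaged, weighted by client data size. With a parameter-decoupling
    strategy (FedRep-like), pass e.g. shared_prefix="backbone." so that
    only backbone weights are shared and per-task heads remain local.
    """
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        if not key.startswith(shared_prefix):
            continue  # decoupled parameters are personalized, not aggregated
        global_state[key] = sum(
            (n / total) * state[key] for state, n in zip(client_states, client_sizes)
        )
    return global_state
```

Gradient-based variants would instead accumulate and average client gradients before a server-side update, rather than averaging model parameters directly.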
The paper conducts an in-depth case study in the IID-1 SDMT scenario to assess the baseline methods across several evaluation criteria: task-specific metrics, comprehensive multi-task performance (the Δ% reported under Statistics below), communication cost, training time, and energy consumption.
The extensive experiments and case study provide valuable insights into the strengths and limitations of existing FMTL baseline methods, contributing to the ongoing discourse on optimal FMTL application in practical scenarios.
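The Δ% figures reported under Statistics below are average relative task improvements over a baseline. A minimal sketch of the Δ% convention common in the MTL literature follows; that FMTL-Bench computes it exactly this way is an assumption, not taken from the paper.

```python
def delta_percent(metrics, baseline, lower_is_better):
    """Average per-task relative improvement over a baseline (Δ%).

    Sign-corrects each task so improvement is positive whether the
    metric is maximized (e.g. mIoU) or minimized (e.g. depth RMSE).
    Illustrative convention, assumed rather than quoted from the paper.
    """
    deltas = [
        (-1.0 if lower_is_better[t] else 1.0) * (metrics[t] - baseline[t]) / baseline[t]
        for t in metrics
    ]
    return 100.0 * sum(deltas) / len(deltas)

# Example: segmentation mIoU up, depth error down -> both count as gains.
print(delta_percent({"seg": 0.66, "depth": 0.54},
                    {"seg": 0.60, "depth": 0.60},
                    {"seg": False, "depth": True}))  # ~10.0
```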
Statistics
Average task performance improvement (Δ%), global / local:
In the IID-1 SDMT scenario:
- FedAvg: 14.39% / 25.08%
- FedRep: 4.43% / 7.36%
- MaT-FL: 4.79% / 9.30%
In the NIID-2 SDST scenario:
- FedAvg: -9.13% / -6.27%
- FedRep: 0.46% / 1.68%
- MaT-FL: 0.58% / 2.59%
Quotes
"FMTL enables a single model to learn multiple tasks in a privacy-preserving, distributed machine learning environment, thereby inheriting and amplifying the challenges of both FL and MTL."
"We meticulously consider the data, model, and optimization algorithm to design seven sets of comparative experiments."
"We glean insights from comparative experiments and case analyses, and provide application suggestions for FMTL scenarios."