Core Concepts
The authors developed FedFMS to address the challenges of training foundation models for medical image segmentation within a federated learning framework. Their approach comprises two models: Federated SAM (FedSAM) and a communication-efficient variant, FedSAM with a Medical SAM Adapter (FedMSA).
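The federated setup described above rests on server-side aggregation of locally trained weights. A minimal sketch of the standard federated averaging (FedAvg) scheme is below; in a FedSAM-style setup every model parameter would be communicated, whereas a FedMSA-style setup would share only the lightweight adapter parameters, which is where its communication savings come from. All names and the client data here are illustrative assumptions, not the authors' implementation.

```python
def fedavg(client_params, weights):
    """Weighted average of per-client parameter dicts (FedAvg)."""
    total = sum(weights)
    return {
        name: sum(w * p[name] for w, p in zip(weights, client_params)) / total
        for name in client_params[0]
    }

# Three hypothetical clients. In a FedMSA-style round, only the
# "adapter.*" entries would be uploaded and averaged; the frozen SAM
# backbone ("encoder.*") stays local.
clients = [
    {"adapter.w": 1.0, "encoder.w": 0.5},
    {"adapter.w": 2.0, "encoder.w": 0.7},
    {"adapter.w": 3.0, "encoder.w": 0.9},
]
weights = [1.0, 1.0, 2.0]  # e.g., proportional to local dataset sizes

global_params = fedavg(clients, weights)
print(global_params["adapter.w"])  # weighted mean: (1 + 2 + 6) / 4 = 2.25
```

Weighting by local dataset size is the usual FedAvg choice; uniform weights are a special case.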
Abstract
The study introduces FedFMS, comprising FedSAM and FedMSA, to explore the performance of federated foundation models for medical image segmentation. Experiments demonstrate results comparable to centralized training while preserving privacy and improving communication efficiency.
The research collected diverse datasets for benchmarking, developed a federated learning framework based on SAM, and investigated the impact of pre-training on model effectiveness. Results showed promising performance across various tasks, highlighting the potential of using foundation models in privacy-preserving federated learning frameworks.
Efficiency analysis revealed that FedMSA reduced parameter count and FLOPs compared to FedSAM, leading to lower communication and training costs. Additionally, an ablation study emphasized the importance of pre-training knowledge from SAM for achieving advanced performance in federated learning scenarios.
Overall, the study advances federated learning for medical imaging by introducing efficient foundation models, FedSAM and FedMSA, for segmentation tasks in this domain.
Stats
14.7 M learnable parameters in FedMSA
93.7 M learnable parameters in FedSAM
739.9 min average training time for FedMSA
911.4 min average training time for FedSAM
52,274 MiB GPU memory usage for FedMSA
58,478 MiB GPU memory usage for FedSAM
5.7 T FLOPs for forward propagation in FedMSA
13.4 T FLOPs for forward propagation in FedSAM
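The parameter counts above translate directly into per-round communication cost, since federated learning exchanges the trainable weights each round. The back-of-the-envelope arithmetic below interprets the counts as millions of parameters (an assumption consistent with the base SAM model size) and assumes 32-bit floats; it is illustrative only, not a figure from the paper.

```python
def round_traffic_bytes(n_params, bytes_per_param=4):
    """Bytes exchanged per round: one upload + one download of the
    trainable parameters, at 4 bytes each for float32 (assumed)."""
    return 2 * n_params * bytes_per_param

fedmsa_params = 14.7e6  # FedMSA communicates only the adapter parameters
fedsam_params = 93.7e6  # FedSAM communicates the full trainable model

ratio = fedsam_params / fedmsa_params
print(f"FedSAM/FedMSA communication ratio: {ratio:.1f}x")
print(f"FedMSA traffic per round: {round_traffic_bytes(fedmsa_params) / 1e6:.1f} MB")
```

Whatever the unit convention, the ratio (about 6.4x less traffic for FedMSA) is unit-independent and matches the efficiency claims summarized above.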
Quotes
"We propose a solution to deploy the foundation model SAM within the federated learning framework."
"Our study is the first to introduce foundation models for federated learning in the medical image domain."