Key Concepts
Optimizing federated learning in heterogeneous environments through pruning and recovery techniques.
Summary
This article introduces a novel federated learning framework that addresses inefficiencies of traditional algorithms by combining asynchronous learning with pruning techniques. The framework aims to improve model training efficiency while maintaining accuracy. It also refines the aggregation process and reduces communication overhead. Experimental results show significant improvements over conventional methods.
Abstract:
Novel federated learning framework for heterogeneous environments.
Combines asynchronous learning and pruning techniques.
Improves model training efficiency while preserving accuracy.
Refines the aggregation process and reduces communication overhead.
Introduction:
Existing FL algorithms assume homogeneous client scenarios.
Challenges with resource-constrained devices in real-world applications.
Asynchronous Federated Learning (AFL) proposed as a solution.
Various approaches like HeteroFL, FedDF, ScaleFL, DepthFL, FedMP discussed.
Federated Learning based on Pruning and Recovery:
Assigns smaller models to clients with limited resources.
Uses an asynchronous approach to avoid over-pruning models.
Model recovery on resource-constrained clients improves overall accuracy.
Improves the aggregation process and reduces communication volume.
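The resource-aware model assignment above can be sketched as a simple mapping from each client's measured per-round training time to a pruning ratio, so slower clients receive smaller models. This is an illustrative sketch, not the paper's exact rule; the linear mapping, the `max_prune` cap, and the function name are assumptions.

```python
def pruning_ratio(client_time, min_time, max_time, max_prune=0.8):
    """Map a client's per-round training time onto a pruning ratio.

    The fastest client gets the full model (ratio 0.0); the slowest
    client gets up to `max_prune` of its parameters pruned away.
    Linear interpolation is an assumption for illustration.
    """
    if max_time == min_time:
        return 0.0  # homogeneous clients: no pruning needed
    frac = (client_time - min_time) / (max_time - min_time)
    return max_prune * frac
```

For example, with measured round times spanning 10 s to 50 s, a client at 30 s would be assigned a 0.4 pruning ratio under this sketch.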
Background:
Time-stream analysis of synchronous federated learning explained.
Issues with update staleness and unbalanced training addressed.
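One common way to handle update staleness in asynchronous aggregation is to discount a client's contribution by how many rounds old its base model is. The sketch below illustrates that idea under assumed names and an assumed reciprocal discount; the paper's actual weighting scheme may differ.

```python
def staleness_weight(current_round, client_round, alpha=0.5):
    """Discount factor for an update trained on a model that is
    `current_round - client_round` rounds old (assumed reciprocal form)."""
    staleness = current_round - client_round
    return 1.0 / (1.0 + alpha * staleness)

def aggregate(global_w, update_w, current_round, client_round):
    """Mix a (possibly stale) client update into the global weights,
    giving fresher updates more influence."""
    s = staleness_weight(current_round, client_round)
    return [(1 - s) * g + s * u for g, u in zip(global_w, update_w)]
```

A fresh update (staleness 0) fully replaces the mixed coordinate, while an update two rounds old contributes only half as much under the default `alpha`.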
PR-FL Time Stream Analysis:
Two stages: adjusting pruning ratio based on client performance time, gradual restoration of pruned models.
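The second stage, gradual restoration of pruned models, can be pictured as a schedule that shrinks a client's effective pruning ratio back to zero over a fixed number of rounds. The linear schedule and all names below are illustrative assumptions, not the paper's specification.

```python
def recovery_schedule(initial_ratio, recovery_rounds, round_idx):
    """Effective pruning ratio during recovery: starts at
    `initial_ratio` and decays linearly to 0 over `recovery_rounds`
    rounds, so pruned parameters are gradually restored."""
    if round_idx >= recovery_rounds:
        return 0.0  # model fully recovered
    return initial_ratio * (1 - round_idx / recovery_rounds)
```

Under this sketch a client that started at a 0.8 pruning ratio with a 4-round recovery window would train at ratios 0.8, 0.6, 0.4, 0.2, and then the full model.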
Differential Model Distribution:
New model distribution paradigm proposed to reduce redundant transmissions from the server to clients.
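The idea of avoiding redundant server-to-client transmissions can be sketched as sending only the parameters that changed since the client's cached copy, which the client then patches in locally. The flat-list representation, the tolerance, and the function names are assumptions for illustration only.

```python
def diff_payload(prev_w, new_w, tol=1e-8):
    """Server side: build the payload as a sparse dict of only the
    indices whose value changed beyond `tol` since the client's
    cached copy."""
    return {i: w for i, (p, w) in enumerate(zip(prev_w, new_w))
            if abs(w - p) > tol}

def apply_diff(cached_w, payload):
    """Client side: patch the cached weights with the received diff
    to reconstruct the current global model."""
    out = list(cached_w)
    for i, w in payload.items():
        out[i] = w
    return out
```

When only a small fraction of parameters change between rounds, the payload is correspondingly small, which is the source of the communication savings.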
Experiments:
Evaluated on image classification tasks: Conv2 on FEMNIST and VGG11 on CIFAR10.
Ablation Study Evaluation Metrics:
Evaluation of different components of PR-FL via ablated variants: synPR-FL, nobuffPR-FL, fedavgPR-FL, noResPR-FL, noRecoverPR-FL.
Statistics
"Various drawbacks arise when applying classical FL to resource-constrained devices."
"Asynchronous schemes are very effective in dealing with dropouts."
"Experiments across various datasets demonstrate significant reductions in training time."
Quotes
"A more complete model is assigned to clients with stronger performance."
"The global model may be biased towards certain clients' data distributions."
"Model recovery contributes to achieving higher accuracy."