ASYN2F introduces a novel approach to federated learning by allowing asynchronous updates of the global model and local models, resulting in improved performance. The framework addresses practical implementation requirements such as communication protocols and storage services. Extensive experiments demonstrate the effectiveness and scalability of ASYN2F compared to state-of-the-art techniques.
The proliferation of IoT devices has led to a massive collection of data, prompting the need for efficient representation and generalization methods.
Federated learning systems consist of a server and distributed training workers, focusing on optimizing global model performance while reducing communication costs.
Existing works in federated learning mainly focus on server-side aggregation methods, overlooking practical implementation challenges related to data privacy and security policies.
ASYN2F introduces bidirectional model aggregation, allowing for asynchronous updates between the server and workers during training epochs.
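The core idea of asynchronous aggregation is that the server merges each worker's update the moment it arrives, rather than waiting for a full round of workers. A common way to handle out-of-date contributions is to down-weight an update by its staleness (how many global versions behind the worker's snapshot is). The decay rule below is an illustrative sketch, not the exact ASYN2F aggregation formula, and `base_alpha` is an assumed hyperparameter:

```python
import numpy as np

def merge_update(global_w, worker_w, staleness, base_alpha=0.5):
    """Merge one worker's weights into the global model.

    The mixing weight decays with staleness (the number of global
    versions the worker's snapshot lags behind), so fresh updates
    move the global model more than stale ones. This decay rule is
    illustrative, not the exact ASYN2F formula.
    """
    alpha = base_alpha / (1.0 + staleness)
    return [(1 - alpha) * g + alpha * w for g, w in zip(global_w, worker_w)]

# The server applies updates as they arrive, in any order:
global_w = [np.zeros(3)]
fresh = merge_update(global_w, [np.ones(3)], staleness=0)  # mixed with alpha = 0.5
stale = merge_update(global_w, [np.ones(3)], staleness=4)  # mixed with alpha = 0.1
```

Because each merge touches only the current global weights and one incoming update, no worker ever blocks another, which is what allows server and workers to proceed at their own pace.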
The framework leverages advanced message queuing protocols for efficient communication and cloud storage for model management.
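The decoupling that a message queue provides can be sketched with Python's standard-library `queue.Queue` standing in for an AMQP broker such as RabbitMQ (the broker choice and message contents here are assumptions, not details from the paper). Workers publish local updates as soon as each epoch finishes; the server consumes them in arrival order without synchronizing the workers:

```python
import queue
import threading

# queue.Queue stands in for a message broker; a real deployment
# would use an AMQP client library and a cloud storage service
# for the model artifacts themselves.
updates = queue.Queue()

def worker(worker_id, n_epochs):
    # Each worker publishes its local update after every epoch,
    # without waiting for the server or for other workers.
    for epoch in range(n_epochs):
        updates.put((worker_id, epoch, f"weights-{worker_id}-{epoch}"))

def server(n_expected, received):
    # The server consumes updates in arrival order (asynchronously),
    # processing each one as soon as it appears on the queue.
    for _ in range(n_expected):
        received.append(updates.get())

received = []
workers = [threading.Thread(target=worker, args=(i, 2)) for i in range(3)]
srv = threading.Thread(target=server, args=(6, received))
for t in workers:
    t.start()
srv.start()
for t in workers:
    t.join()
srv.join()
```

The queue absorbs timing differences between fast and slow workers, so a straggler delays only its own contribution rather than the whole round.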
Experimental results show that ASYN2F outperforms existing techniques in terms of model performance, practicality, and scalability.
Key ideas extracted from arxiv.org, by Tien-Dung Ca..., 03-05-2024.
https://arxiv.org/pdf/2403.01417.pdf