OledFL, a novel approach using an opposite lookahead enhancement technique, significantly improves the convergence speed and generalization performance of decentralized federated learning (DFL) by addressing client inconsistency.
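The summary above does not spell out OledFL's update rule. As background only, here is a minimal sketch of the generic lookahead idea (slow weights periodically pulled toward fast weights by interpolation); the function name, parameters, and rule are illustrative assumptions, not OledFL's actual "opposite lookahead" algorithm.

```python
# Illustrative sketch only: a generic lookahead-style update, NOT OledFL's
# actual rule (the paper's "opposite lookahead" details are not shown here).

def lookahead_step(slow_weights, fast_weights, alpha=0.5):
    """Pull slow weights toward fast weights by interpolation factor alpha."""
    return [s + alpha * (f - s) for s, f in zip(slow_weights, fast_weights)]

slow = [0.0, 1.0]   # weights kept across synchronization points
fast = [2.0, 3.0]   # weights after several local optimizer steps
print(lookahead_step(slow, fast))  # → [1.0, 2.0]
```

In a DFL setting, each client could apply such a step after exchanging fast weights with its neighbors, which is one way a lookahead-style scheme can damp client inconsistency.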
pFedGame, a novel decentralized federated learning algorithm, leverages game theory to achieve efficient model aggregation in dynamic network environments, addressing challenges like data heterogeneity and the absence of a central server.
Decentralized federated learning (DFL) is an emerging framework that eliminates the need for a central server, enabling direct communication and knowledge sharing among clients to improve privacy, efficiency, and resource utilization.
This paper proposes an adaptive decentralized federated learning (DFL) framework that optimizes the number of local training rounds across devices with varying resource budgets, improving model performance under energy and latency constraints.
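The paper's actual optimization is not reproduced here; the following is a hypothetical sketch of the basic idea of budget-aware local rounds, where a device runs as many local rounds as both its energy and latency budgets allow. All names and the formula are assumptions for illustration.

```python
# Hypothetical sketch: pick per-device local training rounds under energy
# and latency budgets. NOT the paper's actual optimization.

def local_rounds(energy_budget, latency_budget,
                 energy_per_round, latency_per_round, max_rounds=10):
    """Largest round count that fits both budgets, capped at max_rounds."""
    by_energy = int(energy_budget // energy_per_round)
    by_latency = int(latency_budget // latency_per_round)
    return max(1, min(by_energy, by_latency, max_rounds))

# A constrained device runs fewer local rounds than a well-resourced one.
print(local_rounds(5.0, 4.0, 1.0, 0.5))    # → 5
print(local_rounds(50.0, 40.0, 1.0, 0.5))  # → 10
```

The point of such adaptivity is that heterogeneous devices contribute unequal amounts of local computation, rather than all being forced to the pace of the weakest client.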
Proposes OCD-FL, a novel peer-selection-based scheme emphasizing communication efficiency in decentralized federated learning, along with its effective implementation.
The author argues that Blockchain-based Decentralized Federated Learning (BDFL) enhances model verification and trustworthiness in decentralized machine learning systems by leveraging a blockchain infrastructure.
The author proposes OCD-FL as a novel scheme for decentralized federated learning, focusing on communication efficiency and energy consumption reduction.
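OCD-FL's actual peer-selection criterion is not given in these summaries. As a rough illustration of the general idea of communication- and energy-aware peer selection, here is a hypothetical greedy sketch that ranks candidate neighbors by expected knowledge gain per unit of energy; the scoring function and all names are assumptions, not OCD-FL's method.

```python
# Hypothetical greedy peer selection under an energy budget; the scoring
# rule and names are illustrative assumptions, not OCD-FL's criterion.

def select_peers(peers, energy_budget):
    """peers: list of (peer_id, expected_gain, energy_cost) tuples.
    Greedily pick peers by gain per unit energy until the budget runs out."""
    ranked = sorted(peers, key=lambda p: p[1] / p[2], reverse=True)
    chosen, spent = [], 0.0
    for pid, gain, cost in ranked:
        if spent + cost <= energy_budget:
            chosen.append(pid)
            spent += cost
    return chosen

peers = [("a", 1.0, 2.0), ("b", 0.9, 1.0), ("c", 0.2, 1.0)]
print(select_peers(peers, 3.0))  # → ['b', 'a']
```

Selecting a subset of peers rather than broadcasting to all neighbors is what trades a small amount of model mixing for large savings in communication and energy.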