Core Concepts
The paper proposes FedMoE-DA, a novel Federated Learning (FL) framework that combines the Mixture of Experts (MoE) architecture with a domain-aware aggregation strategy to improve model robustness, personalization, and communication efficiency under heterogeneous client data.
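For intuition, the sketch below shows what a per-client MoE model typically looks like: a gating network produces softmax weights over K local experts and the expert outputs are combined by those weights. This is a minimal illustrative sketch, not the paper's implementation; class names, layer sizes, and the dense (non-sparse) routing are assumptions.

```python
# Minimal sketch of a per-client Mixture-of-Experts model (illustrative only).
import torch
import torch.nn as nn


class ClientMoE(nn.Module):
    def __init__(self, in_dim: int = 32, hidden_dim: int = 64,
                 out_dim: int = 10, num_experts: int = 4):
        super().__init__()
        # K_i experts per client (the paper sets K_i = 4 for all clients).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_experts)
        ])
        # Gating network: one weight per expert for each input.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)                   # (B, K)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (B, K, out)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)          # (B, out)


if __name__ == "__main__":
    model = ClientMoE()
    logits = model(torch.randn(8, 32))
    print(logits.shape)  # torch.Size([8, 10])
```

In a domain-aware aggregation scheme, the server would additionally decide which clients' experts to aggregate together based on domain similarity, rather than averaging all parameters uniformly.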
Stats
Each client in the study holds roughly the same amount of data, set to 500 local samples.
The number of communication rounds is set to T = 1000.
The number of local training epochs per round is E = 5.
The number of experts is set to Ki = 4 for all clients.
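For convenience, these settings can be collected in a single configuration object, as in the sketch below; the field names are illustrative assumptions, and only the values come from the section above.

```python
# Illustrative configuration collecting the stated experimental settings.
from dataclasses import dataclass


@dataclass
class FedMoEDAConfig:
    samples_per_client: int = 500   # roughly equal local data per client
    num_rounds: int = 1000          # communication rounds T
    local_epochs: int = 5           # local training epochs per round E
    num_experts: int = 4            # experts per client K_i


if __name__ == "__main__":
    print(FedMoEDAConfig())
```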