Core Concepts
Dual-level prototype clustering and α-sparsity loss mitigate cross-domain variance in Federated Prototype Learning.
Abstract
Abstract:
Federated Learning (FL) enables collaborative machine learning without sharing raw data.
Federated Prototype Learning (FedPL) addresses heterogeneous client data domains by sharing class prototypes.
Introduction:
FL performance degrades on non-IID client datasets.
Existing methods mainly address label skew and overlook domain heterogeneity across clients.
Proposed Method (FedPLVM):
Dual-level prototype clustering builds clustered prototypes first locally on each client and then globally on the server, capturing feature-variance information.
An α-sparsity prototype loss increases intra-class similarity and reduces inter-class similarity (see the sketch below).
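The two components above could be sketched as follows. This is a minimal illustration assuming PyTorch and scikit-learn; the helper names (`local_prototypes`, `global_prototypes`, `alpha_sparsity_loss`), the use of k-means at both clustering levels, and the exact loss form (cosine similarities rescaled by an exponent α and a temperature τ inside a prototype-contrastive objective) are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def local_prototypes(features: torch.Tensor, labels: torch.Tensor, k: int = 3) -> dict:
    """First level (client side): cluster each class's features into up to k local prototypes."""
    protos = {}
    for c in labels.unique().tolist():
        feats = features[labels == c].detach().cpu().numpy()
        km = KMeans(n_clusters=min(k, len(feats)), n_init=10).fit(feats)
        protos[c] = torch.tensor(km.cluster_centers_, dtype=torch.float32)
    return protos

def global_prototypes(client_protos: list, k: int = 3) -> dict:
    """Second level (server side): cluster the pooled local prototypes of each class."""
    merged = {}
    classes = set().union(*(p.keys() for p in client_protos))
    for c in classes:
        pooled = torch.cat([p[c] for p in client_protos if c in p]).numpy()
        km = KMeans(n_clusters=min(k, len(pooled)), n_init=10).fit(pooled)
        merged[c] = torch.tensor(km.cluster_centers_, dtype=torch.float32)
    return merged

def alpha_sparsity_loss(z: torch.Tensor, labels: torch.Tensor, protos: dict,
                        tau: float = 0.5, alpha: float = 0.7) -> torch.Tensor:
    """Prototype-contrastive loss: cosine similarities (mapped to (0, 1]) are raised to the
    power alpha before the temperature-scaled softmax, pulling features toward same-class
    prototypes and away from other classes' prototypes."""
    classes = sorted(protos.keys())
    proto_mat = torch.cat([protos[c] for c in classes])                          # (P, d)
    proto_lab = torch.cat([torch.full((len(protos[c]),), c) for c in classes])   # (P,)
    sim = F.cosine_similarity(z.unsqueeze(1), proto_mat.unsqueeze(0), dim=-1)    # (B, P)
    sim = ((sim + 1) / 2).clamp_min(1e-6) ** alpha   # illustrative α-rescaling of similarities
    logits = sim / tau
    pos = (proto_lab.unsqueeze(0) == labels.unsqueeze(1)).float()                # (B, P)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp_min(1)).mean()
```

Keeping several prototypes per class at both levels is what lets the scheme retain variance information that a single averaged prototype per class would discard.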
Experiments:
Evaluated on Digit-5, Office-10, and DomainNet datasets.
Outperformed existing approaches in accuracy across domains.
Ablation Study:
Impact of dual-level prototype generation and temperature τ analyzed.
α-sparsity prototype loss significantly improves performance.
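As a usage illustration, an ablation could sweep τ and α on the prototype loss, reusing `alpha_sparsity_loss` from the sketch above; the grid values and tensor shapes below are illustrative assumptions, not the paper's settings.

```python
import torch

# Hypothetical data; shapes and hyperparameter grids are illustrative only.
feats = torch.randn(32, 128)                           # a batch of feature embeddings
labels = torch.randint(0, 10, (32,))                   # class labels
protos = {c: torch.randn(3, 128) for c in range(10)}   # 3 global prototypes per class

for tau in (0.1, 0.5, 1.0):
    for alpha in (0.5, 0.7, 1.0):
        loss = alpha_sparsity_loss(feats, labels, protos, tau=tau, alpha=alpha)
        print(f"tau={tau}, alpha={alpha}: loss={loss.item():.4f}")
```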
Stats
Improvements over multiple prior studies are reported.
The proposed method's superiority across the evaluated datasets is demonstrated.