The article addresses the challenge of multi-label classification in the federated learning setting, where each client has access only to positive data for a single class label. Without special handling, this setup can cause the learned class embeddings to collapse, leading to poor performance.
The key highlights and insights are:
The authors propose a novel method called FedALC that exploits label correlations to optimize the class embedding matrix, keeping embeddings of correlated labels close while pushing embeddings of dissimilar labels apart, which mitigates the collapse issue.
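The idea of using label correlations to shape a class embedding matrix can be illustrated with a toy regularizer. This is a hedged sketch, not the paper's exact objective: `C` is an assumed pairwise correlation matrix in [0, 1], and the pull/push terms and `margin` are illustrative choices.

```python
import numpy as np

def embedding_regularizer(W, C, margin=1.0):
    """Toy correlation-aware regularizer on a class embedding matrix.

    W: (num_classes, dim) class embedding matrix.
    C: (num_classes, num_classes) assumed label-correlation scores in [0, 1].
    Correlated label pairs are pulled together (squared distance term);
    uncorrelated pairs are pushed at least `margin` apart (hinge term).
    """
    n = W.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(W[i] - W[j])
            # pull correlated labels together, push dissimilar ones apart
            loss += C[i, j] * d**2 + (1 - C[i, j]) * max(0.0, margin - d) ** 2
    return loss
```

Minimizing such a term keeps related labels clustered while preventing all embeddings from collapsing onto a single point, since uncorrelated pairs incur a penalty when they get closer than the margin.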
To obtain the label correlations, the authors design an encrypted, communication-efficient strategy for collecting label information from clients and constructing label distributions on the server.
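At its simplest, the server-side aggregation amounts to merging per-client label counts into a global distribution. The sketch below assumes each client reports a `(label, num_positives)` pair, since each client holds positives for only one class; the encryption layer is omitted and all names are illustrative, not the paper's API.

```python
from collections import Counter

def build_label_distribution(client_reports):
    """Aggregate per-client label counts into a global label distribution.

    client_reports: iterable of (label, num_positives) pairs, one or more
    per client. Returns a dict mapping each label to its relative
    frequency across all clients. Secure aggregation/encryption, which
    the paper uses to protect these counts, is omitted in this sketch.
    """
    counts = Counter()
    for label, n in client_reports:
        counts[label] += n
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}
```

For example, reports `[("cat", 30), ("dog", 60), ("cat", 10)]` yield a distribution of 0.4 for `cat` and 0.6 for `dog`, which the server can then use to estimate label correlations.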
The authors also propose a variant, FedALC-fixed, which learns a fixed class embedding matrix to improve safety and further reduce communication overhead.
Extensive experiments on visual and text datasets demonstrate significant improvements over competitive baselines like FedAwS, achieving relative gains of up to 19.3% on the Bibtex dataset.
The authors provide theoretical analysis of the convergence and optimality of the proposed methods.
Key insights distilled from the paper by Xuming An, Du... at arxiv.org, 04-25-2024.
https://arxiv.org/pdf/2404.15598.pdf