Core Concepts
Federated Co-Training enhances privacy in collaborative machine learning by having clients share hard labels on a shared unlabeled dataset instead of model parameters, improving privacy substantially while maintaining model quality.
Abstract
Federated learning allows collaborative training without sharing sensitive data directly.
Federated Co-Training (FEDCT) shares hard labels on an unlabeled dataset to improve privacy.
FEDCT achieves model quality comparable to Federated Averaging (FEDAVG) and Distributed Distillation (DD) while enhancing privacy.
FEDCT enables training of interpretable models like decision trees, XGBoost, and random forests in a federated setup.
Empirical evaluations show FEDCT's effectiveness on various datasets and its scalability with the number of clients.
FEDCT is particularly relevant to healthcare and chronic disease management, where privacy concerns frequently prevent collaborative training on sensitive patient data.
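The hard-label sharing described above can be sketched as a simple consensus step: each client predicts labels on the shared unlabeled dataset, and the server aggregates those hard labels into a single pseudo-labeling. A minimal sketch, assuming majority voting as the aggregation rule (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def consensus_labels(client_labels):
    """Aggregate clients' hard labels by per-example majority vote.

    client_labels: array-like of shape (num_clients, num_examples),
    where each row holds one client's predicted labels on the
    shared unlabeled dataset.
    """
    client_labels = np.asarray(client_labels)
    consensus = []
    for column in client_labels.T:  # one column per unlabeled example
        values, counts = np.unique(column, return_counts=True)
        consensus.append(values[np.argmax(counts)])  # most frequent label wins
    return np.array(consensus)

# Example: 3 clients label 4 unlabeled examples
labels = [
    [0, 1, 1, 2],
    [0, 1, 0, 2],
    [1, 1, 1, 2],
]
print(consensus_labels(labels))  # -> [0 1 1 2]
```

Because only discrete labels leave each client, no model parameters or gradients are exposed, and the consensus labels can be used to train any model class locally, including decision trees, XGBoost, and random forests.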
Stats
Federated learning allows us to collaboratively train models without pooling sensitive data directly.
FEDCT shares hard labels on an unlabeled dataset to improve privacy substantially.
FEDCT achieves a test accuracy comparable to FEDAVG and DD while enhancing privacy.
Quotes
"Sharing hard labels substantially improves privacy over sharing model parameters."
"FEDCT achieves a model quality comparable to federated learning while improving privacy."