AAggFF is a novel framework that leverages online convex optimization (OCO) to improve client-level fairness in federated learning, adaptively adjusting the server's mixing coefficients in response to client performance feedback.
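As a rough illustration of the OCO view, the sketch below performs one exponentiated-gradient (mirror-descent) step on the probability simplex, shifting mixing weight toward high-loss clients. The function name, learning rate, and the specific mirror-descent choice are our assumptions, not AAggFF's exact decision rule.

```python
import numpy as np

def update_mixing_coefficients(weights, client_losses, lr=0.1):
    """One exponentiated-gradient step on the simplex: clients with higher
    observed loss receive larger mixing weight, steering aggregation
    toward under-served clients."""
    weights = np.asarray(weights, dtype=float)
    losses = np.asarray(client_losses, dtype=float)
    # EG update for the linear objective <w, losses> (to be increased):
    # exponentiate the per-client losses, then renormalize.
    new_w = weights * np.exp(lr * losses)
    return new_w / new_w.sum()  # project back onto the probability simplex

# Example: three clients, the third performing worst this round.
w = np.ones(3) / 3
w = update_mixing_coefficients(w, client_losses=[0.2, 0.3, 0.9])
print(w)  # weight mass shifts toward the high-loss client
```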
WassFFed is a novel framework that enhances fairness in federated learning by using optimal transport to align each sensitive group's model-output distribution with a shared Wasserstein barycenter computed across clients, minimizing output discrepancies between groups.
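To make the barycenter idea concrete, here is a minimal sketch for scalar model outputs, where the Wasserstein-2 barycenter reduces to averaging the groups' quantile functions and the optimal-transport map becomes a rank-based interpolation. The helper names and the one-dimensional restriction are ours, not WassFFed's API.

```python
import numpy as np

def wasserstein_barycenter_1d(group_scores, grid_size=100):
    """For 1-D distributions, the Wasserstein-2 barycenter is obtained by
    averaging the groups' quantile functions on a common grid."""
    qs = np.linspace(0.0, 1.0, grid_size)
    quantiles = [np.quantile(s, qs) for s in group_scores]
    return np.mean(quantiles, axis=0)  # the barycenter's quantile function

def repair_scores(scores, barycenter_quantiles):
    """Optimal-transport 'repair': map each score to the barycenter value
    at the same quantile rank, removing group-dependent output shifts."""
    ranks = np.argsort(np.argsort(scores)) / max(len(scores) - 1, 1)
    grid = np.linspace(0.0, 1.0, len(barycenter_quantiles))
    return np.interp(ranks, grid, barycenter_quantiles)

# Example: two sensitive groups with shifted score distributions.
rng = np.random.default_rng(0)
g0 = rng.normal(0.4, 0.1, 500)   # group A scores
g1 = rng.normal(0.6, 0.1, 500)   # group B scores
bary = wasserstein_barycenter_1d([g0, g1])
# After repair, both groups share the barycenter distribution.
print(repair_scores(g0, bary).mean(), repair_scores(g1, bary).mean())
```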
FedMABA is a novel federated learning algorithm that leverages multi-armed bandits to improve fairness by explicitly constraining performance disparities among clients with diverse data distributions, without compromising the server model's performance.
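A toy sketch of the bandit view: the Exp3 algorithm below treats each client as an arm and rewards clients whose accuracy lags the fleet average, so aggregation weight drifts toward them and the disparity shrinks. The reward design and the choice of Exp3 are illustrative stand-ins; the summary above does not specify FedMABA's exact bandit formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class Exp3:
    """Exp3 multi-armed bandit with one arm per client. A sampled client's
    fairness reward (higher when its accuracy lags the average) raises
    its future aggregation weight."""

    def __init__(self, n_arms, gamma=0.2):
        self.n, self.gamma = n_arms, gamma
        self.log_w = np.zeros(n_arms)

    def probs(self):
        w = np.exp(self.log_w - self.log_w.max())
        p = w / w.sum()
        return (1 - self.gamma) * p + self.gamma / self.n  # mix in exploration

    def pull(self):
        return rng.choice(self.n, p=self.probs())

    def update(self, arm, reward):
        # Importance weighting keeps the reward estimate unbiased
        # under the sampling distribution.
        self.log_w[arm] += self.gamma * reward / (self.n * self.probs()[arm])

# Toy loop: client 2 consistently under-performs, so its weight grows.
true_acc = np.array([0.85, 0.82, 0.55])
bandit = Exp3(n_arms=3)
for _ in range(300):
    c = bandit.pull()
    reward = float(np.clip(true_acc.mean() - true_acc[c] + 0.5, 0, 1))  # in [0, 1]
    bandit.update(c, reward)
print(np.round(bandit.probs(), 3))  # mixing weights favor the lagging client
```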