The article addresses the challenge of fairly training multiple machine learning models concurrently in a federated learning (FL) setting, where clients collaboratively train the models while keeping their data local.
The key highlights are:
The authors propose FedFairMMFL, a difficulty-aware client-task allocation algorithm that dynamically assigns clients to tasks based on each task's current performance, aiming to equalize the converged accuracies (or training times) across tasks.
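A minimal sketch of what one such allocation step could look like, under the illustrative assumption (ours, not necessarily the paper's exact rule) that a task's selection weight is (1 − current accuracy)^α:

```python
import random

def allocate_clients(clients, task_accuracies, alpha, seed=0):
    """Randomly assign each client to one task for the coming round.

    Assumed allocation rule (for illustration; not necessarily the
    paper's exact formula): task m is chosen with probability
    proportional to (1 - accuracy_m) ** alpha, so lower-accuracy
    (harder) tasks attract more clients as alpha grows.
    """
    rng = random.Random(seed)
    weights = [(1.0 - acc) ** alpha for acc in task_accuracies]
    tasks = list(range(len(task_accuracies)))
    allocation = {m: [] for m in tasks}
    for client in clients:
        m = rng.choices(tasks, weights=weights, k=1)[0]
        allocation[m].append(client)
    return allocation

# Example: the weakest task (40% accuracy) attracts the most clients.
clients = [f"client_{i}" for i in range(10)]
print(allocate_clients(clients, task_accuracies=[0.9, 0.6, 0.4], alpha=2.0))
```

With α = 0 the allocation is uniform across tasks; increasing α concentrates clients on the currently hardest tasks.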
The authors provide theoretical guarantees on the fairness and convergence of FedFairMMFL. They show that as the fairness parameter α increases, the algorithm preferentially accelerates the convergence of the more difficult tasks, equalizing training performance across tasks.
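Guarantees of this kind are typically stated through the standard α-fair utility family; assuming the paper follows this common convention (our assumption), the allocation maximizes the sum of α-fair utilities of the per-task performances $x_m$:

```latex
U_\alpha(x) =
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[4pt]
  \log x, & \alpha = 1,
\end{cases}
\qquad
\max \; \sum_{m=1}^{M} U_\alpha(x_m).
```

Here α = 0 recovers utilitarian (sum-of-performance) training, while α → ∞ recovers max-min fairness, which is why larger α shifts effort toward the most difficult task.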
The authors then consider the case where clients have unequal interest in training the different tasks and must be incentivized to participate. They propose a max-min fair auction mechanism to ensure a fair distribution of incentivized clients across tasks.
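One hedged sketch of such a mechanism, under the illustrative assumption of a fixed budget and per-task cost bids (the bid format and greedy rule below are ours, not necessarily the paper's exact auction): repeatedly fund the cheapest remaining bidder for whichever task currently has the fewest recruited clients.

```python
def maxmin_auction(bids, budget):
    """Greedy max-min allocation of incentivized clients to tasks.

    bids:   dict mapping task -> list of (cost, client) pairs, where
            cost is the payment the client demands to train that task.
    budget: total incentive budget available to the platform.

    Illustrative rule (not necessarily the paper's exact mechanism):
    always fund the cheapest bidder of the currently least-served task,
    so no task is starved of clients while others are over-recruited.
    """
    queues = {t: sorted(b) for t, b in bids.items()}  # cheapest bid first
    winners = {t: [] for t in bids}
    while budget > 0:
        open_tasks = [t for t in queues if queues[t]]
        if not open_tasks:
            break
        task = min(open_tasks, key=lambda t: len(winners[t]))
        cost, client = queues[task][0]
        if cost > budget:  # simplification: stop once the target bid is unaffordable
            break
        queues[task].pop(0)
        budget -= cost
        winners[task].append(client)
    return winners

# Example: task_B's cheap bidder is funded before task_A's second one.
bids = {"task_A": [(3, "c1"), (5, "c2")],
        "task_B": [(2, "c3"), (4, "c4")]}
print(maxmin_auction(bids, budget=10))
```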
The authors demonstrate experimentally that their algorithms achieve a higher minimum accuracy across tasks than baseline approaches, while matching or exceeding the baselines' accuracy on the remaining tasks.
Source: arxiv.org