Core Concepts
This paper proposes a federated learning algorithm based on the proximal augmented Lagrangian method to solve constrained machine learning problems with convex global and local constraints.
Abstract
The paper addresses the problem of federated learning (FL) for constrained machine learning (ML) problems, where both the objective and the constraints are in finite-sum form. The authors propose a new FL algorithm based on the proximal augmented Lagrangian (AL) method for such problems.
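A generic finite-sum constrained formulation consistent with this description is sketched below; the symbols $f_i$, $g_i$, $m$, and $\mathcal{X}$ are illustrative and not necessarily the paper's notation:

```latex
\min_{w \in \mathcal{X}} \; \frac{1}{m}\sum_{i=1}^{m} f_i(w)
\quad \text{s.t.} \quad \frac{1}{m}\sum_{i=1}^{m} g_i(w) \le 0,
```

where client $i$ holds its local objective $f_i$ and constraint function $g_i$ (the global constraint), and the convex set $\mathcal{X}$ stands in for the local constraints.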
The key highlights are:
The proposed algorithm is the first to solve general constrained ML problems in an FL setting, with theoretical guarantees on the worst-case complexity.
An ADMM-based inexact solver is developed to solve the unconstrained subproblems arising in the proximal AL method, with a new verifiable termination criterion and global linear convergence guarantees.
Numerical experiments on Neyman-Pearson classification and fairness-aware learning problems with real-world datasets demonstrate the effectiveness of the proposed FL algorithm compared to a centralized proximal AL method.
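To make the overall scheme concrete, here is a minimal sketch of a proximal AL outer loop whose subproblem is solved by consensus ADMM, on a toy scalar problem with quadratic client losses and an affine average constraint. All data values, parameters (`rho`, `beta`, `tau`), and iteration counts are illustrative assumptions; the closed-form updates exploit the toy losses, and the paper's actual solver, termination criterion, and communication pattern are more elaborate.

```python
# Toy federated problem (illustrative values; not from the paper):
#   minimize   f(x) = (1/N) * sum_i (x - a_i)^2        (finite-sum objective)
#   subject to g(x) = (1/N) * sum_i (b_i - x) <= 0     (i.e. x >= mean(b))
a = [0.0, 1.0, 2.0, 3.0]         # client-local data for the objective
b = [2.0, 3.0, 2.0, 3.0]         # client-local data for the constraint
N = len(a)
bbar = sum(b) / N

rho, beta, tau = 1.0, 0.1, 1.0   # AL penalty, proximal weight, ADMM penalty
lam, x_prev = 0.0, 0.0           # multiplier and proximal center

for k in range(50):              # outer proximal AL iterations
    # Inner solver: consensus ADMM on  sum_i f_i(x_i) + P(z)  s.t.  x_i = z,
    # where P(z) = (N*rho/2)*max(0, g(z) + lam/rho)**2 + (N*beta/2)*(z - x_prev)**2.
    x = [0.0] * N                # client-local copies
    u = [0.0] * N                # scaled dual variables
    z = x_prev                   # consensus (server) variable
    for t in range(100):
        # client updates (closed form since f_i(x) = (x - a_i)^2 is quadratic)
        x = [(2.0 * a[i] + tau * (z - u[i])) / (2.0 + tau) for i in range(N)]
        # server z-update: minimize P(z) + (tau/2) * sum_i (x_i + u_i - z)^2
        s = sum(x[i] + u[i] for i in range(N))
        z1 = (tau * s + N * beta * x_prev) / (N * (tau + beta))
        if bbar - z1 + lam / rho <= 0:
            z = z1               # constraint penalty inactive at the minimizer
        else:                    # constraint penalty active
            z = (tau * s + N * beta * x_prev + N * rho * bbar + N * lam) / (N * (tau + beta + rho))
        u = [u[i] + x[i] - z for i in range(N)]
    # proximal AL multiplier and proximal-center updates
    lam = max(0.0, lam + rho * (bbar - z))
    x_prev = z

print(z, lam)    # z ≈ 2.5 (the constrained optimum), lam ≈ 2.0
```

In this toy instance the unconstrained minimizer is `mean(a) = 1.5`, which violates `x >= mean(b) = 2.5`, so the iterates are driven to the constraint boundary while the multiplier converges to the corresponding KKT value.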
Stats
The objective function in the Neyman-Pearson classification problem (Eq. (30)) is the average logistic loss for class 0, with an upper bound constraint on the loss for class 1.
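A small sketch of this Neyman-Pearson setup, evaluating the class-0 objective and the class-1 constraint residual for a given weight; the 1-d synthetic data, the bound `r`, and the helper names are illustrative assumptions, not the paper's Eq. (30) notation:

```python
import math

def logistic_loss(score, y):
    # -log sigmoid(score) for y = 1, -log sigmoid(-score) for y = 0
    return math.log1p(math.exp(-score if y == 1 else score))

# tiny synthetic 1-d dataset (illustrative; the paper uses real-world datasets)
class0 = [-2.0, -1.0, -0.5]      # features labeled 0
class1 = [0.5, 1.5, 2.5]         # features labeled 1
r = 0.4                          # user-chosen upper bound on the class-1 loss

def np_objective_and_constraint(w):
    # objective: average logistic loss on class-0 samples
    obj = sum(logistic_loss(w * x, 0) for x in class0) / len(class0)
    # constraint residual: average class-1 loss minus the bound (feasible iff <= 0)
    con = sum(logistic_loss(w * x, 1) for x in class1) / len(class1) - r
    return obj, con

obj, con = np_objective_and_constraint(1.0)
```

The point `w = 1.0` is feasible here (`con < 0`); the FL algorithm would minimize `obj` over `w` while keeping `con <= 0`.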
The objective function in the fairness-aware learning problem (Eq. (32)) is the average logistic loss, with constraints on the loss disparity between two subgroups.
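The disparity constraints can likewise be sketched as two one-sided constraints bounding the gap between subgroup losses; the subgroup data, the tolerance `eps`, and the helper names are illustrative assumptions rather than the paper's Eq. (32):

```python
import math

def logistic_loss(score, y):
    # -log sigmoid(score) for y = 1, -log sigmoid(-score) for y = 0
    return math.log1p(math.exp(-score if y == 1 else score))

# tiny synthetic data split into two subgroups (illustrative values)
group_a = [(0.5, 1), (-1.0, 0), (1.5, 1)]    # (feature, label) pairs
group_b = [(0.2, 1), (-0.4, 0), (0.8, 0)]
eps = 0.1                                     # allowed loss disparity

def avg_loss(w, group):
    return sum(logistic_loss(w * x, y) for x, y in group) / len(group)

def disparity_constraints(w):
    # |L_a(w) - L_b(w)| <= eps, written as two one-sided constraints
    la, lb = avg_loss(w, group_a), avg_loss(w, group_b)
    return la - lb - eps, lb - la - eps       # feasible iff both <= 0

c1, c2 = disparity_constraints(0.5)
```

Splitting the absolute-value condition into the pair `(la - lb - eps, lb - la - eps)` keeps each constraint in the smooth finite-sum form the algorithm expects; note that the two residuals always sum to `-2*eps`.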