Core Concepts
This work proposes VIM, an efficient and effective Vertical Federated Learning (VFL) optimization framework with multiple heads that leverages the Alternating Direction Method of Multipliers (ADMM) to reduce communication costs and improve performance under differential privacy.
Abstract
The paper introduces VIM, a novel VFL framework that addresses the main challenges faced by existing VFL frameworks. The key highlights are:
VIM introduces multiple heads in the server model, where each head corresponds to one local client. This enables a thorough decomposition of the VFL optimization problem into multiple subproblems that can be iteratively solved by the server and the clients.
The authors propose an ADMM-based method called VIMADMM to solve the VIM optimization problem. VIMADMM allows clients to conduct multiple local updates before communication, which reduces the communication cost and leads to better performance under differential privacy.
The authors provide theoretical analysis on the convergence of VIMADMM and prove that it can converge to stationary points under mild assumptions.
To protect the privacy of local features held by clients, the authors introduce client-level differential privacy mechanisms and prove the privacy guarantees.
Extensive experiments on four diverse datasets show that VIMADMM and its variant VIMADMM-J outperform state-of-the-art VFL methods in terms of faster convergence, higher accuracy, and higher utility under client-level differential privacy and label differential privacy.
The authors also demonstrate that a byproduct of VIM is that the weights of learned heads reflect the importance of local clients, enabling functionalities such as client-level explanation, client denoising, and client summarization.
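The multi-head idea and its client-importance byproduct can be sketched as follows. This is a minimal illustration, not the paper's exact model: it assumes linear server heads whose per-client logits are summed, and uses the L2 norm of each head as a stand-in importance score; all sizes (`M`, `d`, `C`, `B`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: M clients, embedding dim d, C classes, batch size B.
M, d, C, B = 3, 8, 4, 16

# In VFL, each client holds a vertical slice of the features and sends an
# embedding h_m to the server. In VIM, the server keeps one head W_m per
# client (the "multiple heads") and combines the per-head logits, which
# decomposes the objective into per-client subproblems.
client_embeddings = [rng.normal(size=(B, d)) for _ in range(M)]
heads = [rng.normal(size=(d, C)) for _ in range(M)]

logits = sum(h @ W for h, W in zip(client_embeddings, heads))  # (B, C)

# Byproduct: the magnitude of each learned head indicates how much the
# corresponding client contributes to the server's prediction, enabling
# client-level explanation, denoising, and summarization.
importance = [float(np.linalg.norm(W)) for W in heads]
```

Ranking clients by `importance` is the kind of client-level explanation the authors describe; the actual scores in the paper come from the trained heads rather than random weights.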
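A common way to realize a client-level DP guarantee is to bound the L2 norm of each client's outgoing message and add Gaussian noise calibrated to that bound. The sketch below shows this generic clip-and-noise pattern, not the paper's exact mechanism; `clip_norm` and `noise_multiplier` are illustrative parameters, and the mapping from `noise_multiplier` to an (ε, δ) guarantee is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_release(msg: np.ndarray, clip_norm: float = 1.0,
               noise_multiplier: float = 1.0) -> np.ndarray:
    """Clip a client's message to bound its L2 sensitivity, then add
    Gaussian noise proportional to that bound (Gaussian mechanism)."""
    norm = np.linalg.norm(msg)
    clipped = msg * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=msg.shape)
    return clipped + noise

# A client privatizes its embedding before sending it to the server.
embedding = rng.normal(size=(16, 8))
private_embedding = dp_release(embedding, clip_norm=1.0,
                               noise_multiplier=0.5)
```

Because ADMM lets clients run multiple local updates per round, fewer noisy messages are exchanged overall, which is why the authors observe better utility under differential privacy.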
Stats
VIMADMM requires significantly fewer communication rounds to converge than the baselines on all four datasets.
VIMADMM achieves higher test accuracy than the baselines on all four datasets.
Quotes
"To solve the above challenges, in this work, we propose an efficient VFL optimization framework with multiple heads (VIM), where each head corresponds to one local client."
"We provide the client-level DP mechanism for our VIM framework to protect user privacy."
"We conduct extensive evaluations and show that on four vertical FL datasets, VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art."