Core Concepts
This paper introduces Lightweight Vertical Federated Learning (LVFL), which targets both computational and communication efficiency in the vertical federated learning (VFL) setting. LVFL applies separate lightweighting strategies to the feature model and the feature embedding to improve each kind of efficiency.
Key highlights:
- VFL involves clients with different feature spaces but a common sample space, which introduces unique challenges compared to horizontal federated learning (HFL).
- LVFL employs separate lightweighting strategies for the feature model (to improve computational efficiency) and the feature embedding (to enhance communication efficiency).
- The paper establishes a convergence bound for the LVFL algorithm that accounts for both the communication and computational lightweighting ratios.
- Experiments on the CIFAR-10 dataset demonstrate that LVFL can significantly reduce computational and communication demands while preserving robust learning performance.
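The two lightweighting strategies can be illustrated with a small sketch. This is not the paper's exact mechanism: magnitude pruning (for the feature model) and top-k sparsification (for the uploaded embedding) are assumed stand-ins for whichever concrete lightweighting operators LVFL uses, and both function names are hypothetical.

```python
import numpy as np

def prune_weights(w, ratio):
    """Computational lightweighting (illustrative): zero out the
    smallest-magnitude fraction `ratio` of feature-model weights,
    so forward/backward passes touch fewer effective parameters."""
    k = int(ratio * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def compress_embedding(h, ratio):
    """Communication lightweighting (illustrative): keep only the
    largest-magnitude (1 - ratio) fraction of embedding entries
    (top-k sparsification), shrinking what must be transmitted."""
    k = max(1, int((1.0 - ratio) * h.size))
    idx = np.argsort(np.abs(h))[-k:]
    out = np.zeros_like(h)
    out[idx] = h[idx]
    return out
```

A higher ratio means more aggressive lightweighting in both cases; the convergence bound below quantifies how these ratios trade efficiency against learning performance.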
The paper first provides an overview of the VFL system model and formulates the learning objective. It then introduces the LVFL algorithm, which dynamically adjusts the computational and communication lightweighting ratios for each client. The convergence analysis follows, deriving a bound that relates convergence behavior to the lightweighting errors and ratios. Finally, the experimental results validate the effectiveness of LVFL in balancing efficiency and performance.
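The per-round flow described above can be sketched end to end. This is a toy, assumption-laden version: two clients hold disjoint feature partitions of the same samples, each client linearly maps its partition to an embedding, and the server simply sums the uploads. The pruning and sparsification operators are illustrative stand-ins, and `one_round` is a hypothetical name, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertical split: 2 clients, each with its own feature partition of the
# same 16 samples, plus a per-client linear feature model.
n_samples, feats_per_client, embed_dim = 16, 4, 3
X = [rng.normal(size=(n_samples, feats_per_client)) for _ in range(2)]
W = [rng.normal(size=(feats_per_client, embed_dim)) for _ in range(2)]

def one_round(comp_ratio, comm_ratio):
    """One LVFL-style round (illustrative): each client prunes its
    feature model, computes embeddings, sparsifies them before upload,
    and the server aggregates the received embeddings."""
    uploads = []
    for Xc, Wc in zip(X, W):
        # Computational lightweighting: magnitude-prune the feature model.
        k = int(comp_ratio * Wc.size)
        Wp = Wc.copy()
        if k > 0:
            thresh = np.partition(np.abs(Wp).ravel(), k - 1)[k - 1]
            Wp[np.abs(Wp) <= thresh] = 0.0
        H = Xc @ Wp
        # Communication lightweighting: upload only the top entries
        # of each sample's embedding.
        keep = max(1, int(round((1 - comm_ratio) * embed_dim)))
        idx = np.argsort(np.abs(H), axis=1)[:, -keep:]
        Hs = np.zeros_like(H)
        np.put_along_axis(Hs, idx, np.take_along_axis(H, idx, axis=1), axis=1)
        uploads.append(Hs)
    return sum(uploads)  # server-side aggregation of client embeddings
```

With both ratios set to zero the round reduces to plain VFL; raising either ratio per client, per round, is what "dynamically adjusting the lightweighting ratios" amounts to in this sketch.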