The content presents a novel privacy-preserving federated learning (PPFL) framework that leverages random coding and system immersion tools from control theory. The key idea is to treat the optimization algorithms used in standard federated learning (FL) schemes as dynamical systems and immerse them into a higher-dimensional "target" optimization algorithm.
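In control-theoretic terms, the immersion requirement can be written as a commutation condition between the original and target updates. The notation below is a minimal illustration of that standard condition, not taken verbatim from the paper:

    % Original FL update (dimension n) and target update (dimension m > n):
    %   x_{k+1} = f(x_k),   \tilde{x}_{k+1} = \tilde{f}(\tilde{x}_k)
    % The immersion map \pi intertwines the two systems:
    \[
      \pi\bigl(f(x)\bigr) = \tilde{f}\bigl(\pi(x)\bigr) \quad \text{for all } x,
    \]
    % so if \tilde{x}_0 = \pi(x_0), then \tilde{x}_k = \pi(x_k) for every k:
    % running the target algorithm on encoded states exactly tracks the
    % encoded trajectory of the original algorithm. With an affine immersion
    % \pi_1(x) = \Pi x + b, where \Pi has full column rank, the original state
    % is recovered via the left inverse: x = \Pi^{\dagger}(\pi_1(x) - b).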
The framework consists of the following steps (a runnable sketch of one full round follows the steps below):
Server Encoding: The server encodes the global model parameters using an affine immersion map π1(·) before broadcasting to clients. This creates a higher-dimensional representation of the model parameters.
Client Local Training: Clients update their local models by running a target optimization algorithm that operates directly on the encoded parameters and converges to an encoded version of the true local model parameters.
Aggregation: Clients send their encoded local model updates to a third-party aggregator, which combines them and forwards the encoded aggregate to the server.
Server Decoding: The server decodes the aggregated model using the left inverse of the encoding map π1(·) to retrieve the original aggregated model.
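As referenced above, here is a minimal runnable sketch of one round under these four steps. It assumes a toy quadratic loss per client, plain gradient descent as the baseline algorithm, and an affine immersion pi1(x) = M x + b with a randomly drawn, full-column-rank M; all function and variable names are illustrative, not the paper's API. For clarity, the target update here decodes internally; in the actual framework the target algorithm is constructed so that clients operate only on encoded parameters and never decode.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 3, 6                      # original / immersed dimensions (m > n)

    # Affine immersion pi1(x) = M x + b with full-column-rank M
    # (hypothetical instantiation; the encoding is drawn at random).
    M = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    M_left = np.linalg.pinv(M)       # left inverse: M_left @ M == I_n

    def encode(x):
        return M @ x + b

    def decode(z):
        return M_left @ (z - b)

    # Each client i minimizes a toy quadratic loss 0.5 * ||x - c_i||^2.
    clients = [rng.standard_normal(n) for _ in range(4)]
    eta = 0.1

    def plain_step(x, c):
        # Baseline FL local update: one gradient step on client i's loss.
        return x - eta * (x - c)

    def target_step(z, c):
        # Target update in the immersed space, chosen so that
        # target_step(encode(x)) == encode(plain_step(x)), i.e. the
        # immersion condition holds. Decoding here is for illustration
        # only; see the lead-in.
        return encode(plain_step(decode(z), c))

    # --- one FL round ---
    x_global = np.zeros(n)
    z_global = encode(x_global)                             # 1) server encodes
    z_locals = [target_step(z_global, c) for c in clients]  # 2) encoded local training
    z_agg = np.mean(z_locals, axis=0)                       # 3) aggregator averages
    x_agg = decode(z_agg)                                   # 4) server decodes

    # Check against plain FedAvg on unencoded parameters.
    x_plain = np.mean([plain_step(x_global, c) for c in clients], axis=0)
    assert np.allclose(x_agg, x_plain)
    print("decoded aggregate matches plain FedAvg:", x_agg)

Because the encoding is affine and the aggregation weights sum to one, averaging commutes with the encoding, which is why the decoded aggregate matches plain FedAvg exactly rather than approximately.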
An additional encoding step by the aggregator using π2(·) is introduced to further protect the privacy of the intermediate global models from the server.
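As a purely hypothetical continuation of the sketch above (the exact split of encoding keys between server and aggregator is not detailed here), the snippet below shows the algebra that makes such chained encodings workable: composing two affine immersions yields another affine immersion, and only a party holding a given map can strip that layer.

    import numpy as np

    rng = np.random.default_rng(1)
    m, p = 6, 9                   # pi2 immerses the m-dim encoded space into p > m dims

    # Hypothetical second affine encoding pi2(z) = M2 z + b2, held by the aggregator.
    M2 = rng.standard_normal((p, m))
    b2 = rng.standard_normal(p)

    z = rng.standard_normal(m)    # stands in for an encoded aggregate z_agg
    w = M2 @ z + b2               # aggregator-side re-encoding pi2(z)

    # Stripping the pi2 layer requires knowing M2 and b2; the left inverse
    # of pi1 alone is useless against it.
    z_back = np.linalg.pinv(M2) @ (w - b2)
    assert np.allclose(z_back, z)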
The proposed framework is shown to provide any desired level of differential privacy for both local and global models without degrading the accuracy or convergence rate of the underlying federated learning algorithm. It is also computationally lighter than alternative privacy-preserving approaches such as differentially private noise injection, secure multi-party computation, and homomorphic encryption.
Key Insights Distilled From: Haleh Hayati et al., arXiv, 2024-09-27, https://arxiv.org/pdf/2409.17201.pdf