Core Concepts
QuanCrypt-FL leverages quantization, pruning, and homomorphic encryption to enable secure and efficient federated learning, mitigating inference attacks like gradient inversion while minimizing communication and computational overhead.
Statistics
QuanCrypt-FL achieves up to 9x faster encryption, 16x faster decryption, and 1.5x faster inference compared to BatchCrypt.
QuanCrypt-FL reduces training time by up to 3x compared to BatchCrypt.
The study utilized a polynomial modulus degree of 16384 and a coefficient modulus chain with bit sizes [60, 40, 40, 40, 60] for the CKKS homomorphic encryption scheme.
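A quick sanity check on what these CKKS parameters imply. This is a sketch, not the paper's code; the interpretation of the middle primes as the rescaling budget follows standard SEAL/CKKS conventions:

```python
# CKKS parameters as reported in the study (SEAL/TenSEAL-style notation).
POLY_MODULUS_DEGREE = 16384
COEFF_MOD_BIT_SIZES = [60, 40, 40, 40, 60]

# Total coefficient modulus size in bits; at N = 16384 this stays well
# under the ~438-bit bound for 128-bit security from the Homomorphic
# Encryption Standard, so the parameter set is conservative.
total_bits = sum(COEFF_MOD_BIT_SIZES)

# By convention the first and last primes are reserved (encryption and
# key-switching), so the three middle 40-bit primes allow roughly three
# rescalings, i.e. multiplicative depth ~3.
depth = len(COEFF_MOD_BIT_SIZES) - 2
```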
A clipping factor (𝛼) of 3.0 was used to manage extreme values in model updates.
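One common way to apply such a clipping factor is to bound each element of an update to mean ± α·std. Whether QuanCrypt-FL clips per layer or over the whole update vector is not stated here, so this is a minimal sketch under that assumption:

```python
import statistics

ALPHA = 3.0  # clipping factor reported in the study's setup


def clip_update(values, alpha=ALPHA):
    """Clip each element to [mu - alpha*sigma, mu + alpha*sigma].

    A standard way to manage extreme values in model updates; the exact
    clipping granularity used by QuanCrypt-FL is an assumption here.
    """
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    lo, hi = mu - alpha * sigma, mu + alpha * sigma
    return [min(max(v, lo), hi) for v in values]
```

Clipping before quantization keeps outliers from stretching the quantization range and wasting precision on rare extreme values.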
Pruning started at round 40 (𝑡eff) with an initial pruning rate (𝑝0) of 20%, reaching a target pruning rate (𝑝target) of 50% by round 300 (𝑡target).
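The schedule above can be sketched as a ramp from p₀ at round t_eff to p_target at round t_target. A linear ramp is assumed here; the paper may use a different (e.g. polynomial) interpolation:

```python
T_EFF, T_TARGET = 40, 300    # rounds where pruning starts / saturates
P0, P_TARGET = 0.20, 0.50    # initial and target pruning rates


def pruning_rate(t):
    """Pruning rate at communication round t.

    Linear interpolation between (T_EFF, P0) and (T_TARGET, P_TARGET)
    is an assumption; only the endpoints come from the study.
    """
    if t < T_EFF:
        return 0.0
    if t >= T_TARGET:
        return P_TARGET
    frac = (t - T_EFF) / (T_TARGET - T_EFF)
    return P0 + (P_TARGET - P0) * frac
```

Delaying pruning until round 40 lets the model stabilize before weights are removed, and the gradual ramp avoids a sudden accuracy drop.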