Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization
A novel multi-codebook product quantization compression method for secure and communication-efficient federated learning that combines local public data with client updates to generate robust codebooks.
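To make the compression idea concrete, below is a minimal, generic sketch of product quantization applied to a batch of update subvectors: the vector space is split into subspaces, one codebook is learned per subspace, and only the per-subspace codeword indices need to be transmitted. This is standard product quantization, not the paper's specific multi-codebook scheme; all function names (`kmeans`, `pq_fit`, `pq_encode`, `pq_decode`) and parameter choices are illustrative assumptions.

```python
import numpy as np


def kmeans(x, k, iters=25, seed=0):
    """Plain Lloyd's k-means, kept dependency-free for this sketch."""
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        # Assign every point to its nearest centroid, then recompute means.
        labels = np.linalg.norm(x[:, None] - c[None], axis=2).argmin(1)
        for j in range(k):
            pts = x[labels == j]
            if len(pts):
                c[j] = pts.mean(0)
    return c


def pq_fit(vectors, m, k):
    """Learn one k-entry codebook per subspace (m subspaces)."""
    return [kmeans(s, k) for s in np.split(vectors, m, axis=1)]


def pq_encode(vectors, codebooks):
    """Replace each subvector by the index of its nearest codeword."""
    codes = []
    for s, cb in zip(np.split(vectors, len(codebooks), axis=1), codebooks):
        codes.append(np.linalg.norm(s[:, None] - cb[None], axis=2).argmin(1))
    return np.stack(codes, axis=1)  # shape (n, m): m small integers per vector


def pq_decode(codes, codebooks):
    """Reconstruct approximate vectors by concatenating looked-up codewords."""
    return np.hstack([cb[codes[:, i]] for i, cb in enumerate(codebooks)])


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Pretend these 256 rows are 8-dimensional chunks of a client update.
    update = rng.normal(size=(256, 8))
    codebooks = pq_fit(update, m=4, k=16)
    codes = pq_encode(update, codebooks)
    approx = pq_decode(codes, codebooks)
    # Each row shrinks from 8 floats to 4 four-bit indices.
    print(codes.shape, np.mean((approx - update) ** 2))
```

In a federated setting, the communication saving comes from sending `codes` (and, depending on the protocol, the shared codebooks) instead of raw floating-point updates; here each 8-float row is replaced by 4 indices into 16-entry codebooks.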