Core Concepts
The authors propose the CoLR framework to reduce communication overhead in FedRec systems by leveraging low-rank structures, while remaining compatible with secure aggregation protocols.
Summary
The authors introduce the CoLR framework to address communication challenges in FedRec systems. By adjusting lightweight trainable parameters while keeping most parameters frozen, CoLR significantly reduces communication overhead without compromising performance. The method remains compatible with secure aggregation protocols, offering benefits such as reduced communication costs, low computational overheads, and adaptability to bandwidth heterogeneity.
User privacy concerns with centralized recommendation systems have led to the emergence of federated recommendation (FedRec) systems, in which recommendation models are exchanged between a central server and edge devices such as mobile phones and laptops. Clients vary in computational speed and bandwidth capability, which can cause performance issues.
To reduce communication costs in practical FedRec systems, mechanisms such as reducing communication frequency and compressing messages are commonly used. However, these methods may be incompatible with secure aggregation protocols, such as those based on Homomorphic Encryption (HE). The proposed CoLR framework addresses these challenges by enforcing a low-rank structure on updates to the item embedding matrix during training rounds.
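The low-rank update idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names and dimensions (`num_items`, `dim`, `rank`) are illustrative assumptions, and the update is factored into two small matrices in the style of low-rank adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

num_items, dim = 1000, 32   # illustrative item-embedding table size
rank = 2                    # low-rank bottleneck, rank << dim

item_emb = rng.normal(size=(num_items, dim))  # global item embeddings

# Instead of uploading a dense (num_items x dim) update, a client trains
# two small factors whose product forms the update: delta = B @ A.
B = np.zeros((num_items, rank))          # zero init: no change before training
A = rng.normal(size=(rank, dim)) * 0.01  # small random up-projection

# ... local training would update B (and possibly A) here ...

# The client uploads only the small factors; the dense update is their product.
delta = B @ A
updated_emb = item_emb + delta

dense_payload = num_items * dim
lowrank_payload = num_items * rank + rank * dim
print(f"payload ratio: {lowrank_payload / dense_payload:.3f}")
```

With these illustrative sizes, the upload shrinks from 32,000 values to 2,064, roughly a factor of `rank / dim` when the item count dominates.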
Experimental results show that CoLR outperforms base models and compression methods like SVD and Top-K compression in terms of recommendation performance while significantly reducing communication costs. The method is also compatible with HE-based FedRec systems, offering privacy preservation benefits.
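One way a low-rank update can stay compatible with additive secure aggregation is if one factor is shared across clients within a round (for example, derived from a common random seed), so the server only needs to sum the per-client factors. This is a hypothetical sketch of that property, not a description of the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
num_items, dim, rank = 100, 16, 2  # illustrative sizes

# Hypothetical setup: the server broadcasts one shared factor A per round,
# and each client trains only its own factor B_i.
A = rng.normal(size=(rank, dim))
client_Bs = [rng.normal(size=(num_items, rank)) for _ in range(5)]

# Secure aggregation (or additively homomorphic encryption) can compute a
# sum of client payloads without revealing any individual update.
B_sum = sum(client_Bs)

# Because matrix multiplication distributes over addition,
# (sum_i B_i) @ A == sum_i (B_i @ A): aggregating the small factors
# yields exactly the aggregate of the dense updates.
agg_from_factors = (B_sum @ A) / len(client_Bs)
agg_dense = sum(B @ A for B in client_Bs) / len(client_Bs)
assert np.allclose(agg_from_factors, agg_dense)
```

The key point is that the server never needs the per-client dense updates; summation over the uploaded factors suffices, which is exactly the operation secure aggregation supports.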
Overall, the CoLR framework presents a promising solution for efficient communication and secure federated recommendation systems by leveraging low-rank structures in model updates.
Statistics
Our approach substantially reduces communication overheads without introducing additional computational burdens.
The approach resulted in a reduction of up to 93.75% in payload size.
Code for reproducing our experiments can be found at https://github.com/NNHieu/CoLR-FedRec.
In experiments on the MovieLens-1M dataset, we vary the dimensions of the user and item embeddings across different settings.
We set the number of clients participating in each round equal to 1% of all users in each dataset.
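The 93.75% figure corresponds to a payload 1/16 the size of the dense update, since payload size scales roughly with the ratio of update rank to embedding dimension. The numbers below are purely illustrative (not necessarily the paper's configuration):

```python
dim, rank = 32, 2            # illustrative embedding dim and update rank
reduction = 1 - rank / dim   # payload shrinks roughly by rank/dim
print(f"{reduction:.2%}")    # 93.75%
```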
Quotes
"Our approach substantially reduces communication overheads without introducing additional computational burdens."
"The method remains compatible with secure aggregation protocols."
"Experimental results show that CoLR outperforms base models and compression methods."