Efficient Fine-Tuning of Large Pre-Trained Models in Federated Learning via Low-Rank, Task-Specific Adapter Clustering
The proposed FL-TAC algorithm enables efficient fine-tuning of large pre-trained models in federated learning: each client trains a low-rank, task-specific adapter locally, and the server clusters the uploaded adapters by task before aggregating them, enabling knowledge exchange across similar tasks.
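The server-side step described above can be sketched in a few lines. The snippet below is a minimal illustration, not FL-TAC's actual implementation: `local_adapter` is a hypothetical stand-in for a client's LoRA-style fine-tuning, and plain k-means (with farthest-point initialization) stands in for whatever clustering method the paper uses. Clients working on the same task produce similar low-rank factors, so the server can group them and average within each cluster.

```python
import numpy as np

D, R = 16, 2  # hidden dimension and adapter rank (illustrative sizes)

def local_adapter(task_seed, client_seed):
    """Stand-in for a client's local fine-tuning: returns LoRA-style
    low-rank factors A (D x R) and B (R x D) for the client's task,
    with a small per-client perturbation. Hypothetical helper."""
    base = np.random.default_rng(task_seed)
    jitter = np.random.default_rng(client_seed)
    A = base.normal(size=(D, R))
    B = base.normal(size=(R, D))
    return (A + 0.01 * jitter.normal(size=A.shape),
            B + 0.01 * jitter.normal(size=B.shape))

def flatten(adapter):
    # Vectorize the low-rank factors so adapters can be compared directly.
    A, B = adapter
    return np.concatenate([A.ravel(), B.ravel()])

def kmeans(X, k, iters=10):
    # Minimal k-means with farthest-point initialization (deterministic here).
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d2))])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Six clients spread over two tasks: same task -> similar adapters.
clients = [(1, 10), (1, 11), (1, 12), (2, 20), (2, 21), (2, 22)]
X = np.stack([flatten(local_adapter(t, c)) for t, c in clients])

labels = kmeans(X, k=2)
# Server-side aggregation: average adapter parameters within each cluster.
cluster_avg = {int(j): X[labels == j].mean(axis=0) for j in np.unique(labels)}
print(labels)
```

Averaging within a cluster plays the role of FedAvg restricted to one task, so updates from unrelated tasks no longer interfere with each other during aggregation.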