Efficient Fine-tuning of Language Models in Federated Learning Using Weight Decomposition
The authors propose FeDeRA, a method that initializes the LoRA adapter modules via singular value decomposition (SVD) of the pretrained weights, to improve the performance of parameter-efficient fine-tuning in federated learning settings with highly non-IID data.
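To illustrate the core idea, the sketch below shows how SVD-based initialization of LoRA adapters could look: the pretrained weight matrix is factored with a truncated SVD, and the leading singular components are split between the two low-rank factors. This is a minimal illustration under assumed conventions (the split of the singular values between the factors, the rank, and the function name `svd_lora_init` are illustrative, not taken from the paper).

```python
import numpy as np

def svd_lora_init(W: np.ndarray, r: int):
    """Initialize LoRA factors B (d_out x r) and A (r x d_in)
    from the top-r singular components of the pretrained weight W."""
    # Thin SVD: W = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the singular values evenly between the two factors
    sqrt_S = np.sqrt(S[:r])
    B = U[:, :r] * sqrt_S            # down-projection output side
    A = (Vt[:r, :].T * sqrt_S).T     # up-projection input side
    return B, A

# Usage: B @ A is the best rank-r approximation of W,
# so the adapter starts aligned with the dominant weight directions.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
B, A = svd_lora_init(W, r=8)
```

Compared with LoRA's standard zero/Gaussian initialization, starting from the dominant singular directions of the pretrained weights gives every federated client the same informed starting point, which is the intuition behind the method's robustness to non-IID data.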