Core Concepts
SuperLoRA is a parameter-efficient framework for fine-tuning large models that unifies and extends LoRA variants with a high degree of flexibility.
Abstract
SuperLoRA introduces a generalized framework that unifies and extends various LoRA variants, providing high parameter efficiency for adapting large models to transfer learning tasks. Its flexibility comes from a set of hyperparameters controlling grouping, folding, shuffling, projecting, and tensor factoring of the weight updates. By reshaping the updates into tensors and introducing projection layers, SuperLoRA achieves significant reductions in the number of trainable parameters while maintaining performance quality, and experimental results show competitive performance across different parameter regimes.
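The grouping-plus-projection idea can be illustrated with a short sketch. The code below is a minimal, self-contained illustration under assumed shapes, names, and initialization (`grouped_lowrank_update` and `proj_ratio` are hypothetical), not the paper's implementation: the updates for a group of layers are flattened into one vector, parameterized by a small trainable low-rank product, expanded through a frozen random projection, and split back into per-layer updates.

```python
import torch

def grouped_lowrank_update(weight_shapes, rank=4, proj_ratio=0.5, seed=0):
    """Illustrative grouped low-rank update with a fixed projection (assumed design)."""
    g = torch.Generator().manual_seed(seed)
    total = sum(r * c for r, c in weight_shapes)   # size of the grouped update
    inner = max(1, int(total * proj_ratio))        # reduced size before projection

    # Trainable low-rank factors A, B generate the small "inner" update.
    side = int(inner ** 0.5) + 1
    A = torch.randn(side, rank, generator=g, requires_grad=True)
    B = torch.randn(rank, side, generator=g, requires_grad=True)
    inner_update = (A @ B).reshape(-1)[:inner]

    # Frozen random projection maps the inner update up to full size, so
    # only A and B contribute trainable parameters.
    P = torch.randn(total, inner, generator=g) / inner ** 0.5
    flat = P @ inner_update

    # Split the flat vector back into one delta-W per layer in the group.
    updates, offset = [], 0
    for r, c in weight_shapes:
        updates.append(flat[offset:offset + r * c].reshape(r, c))
        offset += r * c
    return updates, (A, B)

# Example: one grouped update shared across two 32x32 weights.
deltas, trainable = grouped_lowrank_update([(32, 32), (32, 32)], rank=4)
print([tuple(d.shape) for d in deltas], sum(p.numel() for p in trainable))
```

The design point the sketch makes is that the projection is not trained: it lets a very small trainable core drive updates for an entire group of layers.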
Stats
LoRA approximates each weight update with a product of two low-rank matrices (see the LoRA sketch below).
SuperLoRA adds flexibility through grouping and tensor decomposition of the weight updates.
LoNKr extends the Kronecker-product variant LoKr to multiple splits (see the Kronecker sketch below).
LoRTA folds the update matrices into higher-order tensors before factorizing them (see the folded-tensor sketch below).
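For reference, the basic LoRA mechanism is a few lines of code. This is a minimal sketch of the standard LoRA idea (class name, scaling, and initialization details are illustrative): the pretrained weight stays frozen and only the two small factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                  # freeze pretrained weights
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))         # up-projection (zero init)
        self.scale = alpha / rank

    def forward(self, x):
        # base(x) + scale * (x A^T) B^T; the full d_out x d_in update is never formed.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(512, 512), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 8 * (512 + 512)
```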
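The Kronecker-product family (LoKr, and LoNKr's multi-split extension) can be sketched similarly. The split count, factor shapes, and function name below are assumptions chosen only to show the parameter-count effect, not the variants' exact formulations.

```python
import torch

def kron_update(d_out, d_in, splits=2, a_rows=8, a_cols=8, seed=0):
    """Illustrative LoKr/LoNKr-style update: a sum over `splits` Kronecker
    products of small factors, so the full d_out x d_in matrix is built
    from far fewer trainable entries."""
    g = torch.Generator().manual_seed(seed)
    assert d_out % a_rows == 0 and d_in % a_cols == 0
    terms, params = [], []
    for _ in range(splits):                      # splits=1 ~ LoKr, splits>1 ~ LoNKr-like
        A = torch.randn(a_rows, a_cols, generator=g, requires_grad=True)
        B = torch.randn(d_out // a_rows, d_in // a_cols, generator=g, requires_grad=True)
        terms.append(torch.kron(A, B))           # (d_out, d_in) Kronecker product
        params += [A, B]
    return sum(terms), params

delta_w, params = kron_update(256, 256, splits=2)
print(tuple(delta_w.shape), sum(p.numel() for p in params))  # (256, 256), 2 * (64 + 1024)
```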
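Finally, the folded-tensor idea behind LoRTA can be sketched with a CP-style factorization of a 4th-order fold. The choice of CP (rather than, e.g., Tucker), the fold shape, and the function name are assumptions made for illustration.

```python
import torch

def folded_tensor_update(shape4d=(16, 16, 16, 16), rank=2, seed=0):
    """Illustrative folded-tensor update: the delta-W matrix is viewed as a
    4th-order tensor parameterized by one small factor per mode
    (a CP-style decomposition), then reshaped back to a matrix."""
    g = torch.Generator().manual_seed(seed)
    factors = [torch.randn(s, rank, generator=g, requires_grad=True) for s in shape4d]
    # Sum of rank-1 outer products over the four tensor modes.
    delta = torch.einsum('ir,jr,kr,lr->ijkl', *factors)
    d_out, d_in = shape4d[0] * shape4d[1], shape4d[2] * shape4d[3]
    return delta.reshape(d_out, d_in), factors

delta_w, factors = folded_tensor_update()
print(tuple(delta_w.shape), sum(f.numel() for f in factors))  # (256, 256), 4 * 16 * 2
```

Compared with a rank-2 LoRA on the same 256x256 matrix (about 1,024 trainable values), the folded parameterization here uses 128, which is the kind of trade-off the folding is meant to expose.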
Quotes
"Most recent work decomposes each convolution kernel into a learnable filter atom and its non-learnable counterparts."
"SuperLoRA provides more flexibility and extended functionality controlled by a set of hyperparameters."
"Projection layers in SuperLoRA improve parameter efficiency significantly."