DLoRA enables scalable parameter-efficient fine-tuning of large language models by offloading part of the computation to user devices, so that user data can remain on-device, which reduces privacy risks and improves efficiency.
PiSSA represents each pre-trained weight matrix as the product of two low-rank trainable matrices, initialized with the principal singular values and vectors, plus a frozen residual matrix; by optimizing this significantly reduced parameter space, it achieves or surpasses the performance of full-parameter fine-tuning.
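To make the decomposition concrete, the sketch below shows one way the SVD-based initialization could be implemented in PyTorch. The function name `pissa_init`, its arguments, and the shape conventions are illustrative assumptions, not the authors' reference code.

```python
import torch

def pissa_init(W: torch.Tensor, r: int):
    """Hypothetical sketch: split a pre-trained weight matrix W (out x in)
    into a trainable low-rank pair (A, B), initialized from the top-r
    singular values/vectors, and a frozen residual matrix."""
    # SVD of the pre-trained weights: W = U diag(S) V^T
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)

    # Principal components initialize the two trainable adapter matrices.
    sqrt_S = torch.sqrt(S[:r])
    A = U[:, :r] * sqrt_S                 # (out, r), trainable
    B = sqrt_S.unsqueeze(1) * Vh[:r, :]   # (r, in), trainable

    # The remaining (minor) components form the residual, kept frozen.
    W_res = W - A @ B

    return A, B, W_res

# Usage: the adapted layer computes x @ (W_res + A @ B).T,
# and only A and B receive gradient updates during fine-tuning.
```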
Leverage Learning, a novel methodology, can significantly reduce reliance on task-specific data while achieving performance comparable to fine-tuning on substantially larger task datasets. Its minimalist implementation, Token-Efficient Leverage Learning (TELL), demonstrates a marked improvement in performance per task token over traditional Supervised Fine-Tuning.