Parameter-Efficient Quasi-Orthogonal Fine-Tuning for Adapting Pretrained Language Models
The authors propose qGOFT, a parameter-efficient quasi-orthogonal fine-tuning method that improves how well pretrained language models adapt to downstream tasks while keeping the number of trainable parameters small.
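To make the idea concrete, here is a minimal NumPy sketch of the general recipe behind orthogonal fine-tuning with Givens rotations: the pretrained weight stays frozen, and only a small set of rotation angles is learned. The function names, the choice of coordinate pairs, and the per-rotation scale used here to relax strict orthogonality (the "quasi" part) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def givens_rotation(d, i, j, theta):
    """Return a d x d Givens rotation acting on coordinates (i, j)."""
    G = np.eye(d)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c
    G[j, j] = c
    G[i, j] = -s
    G[j, i] = s
    return G

def quasi_orthogonal_transform(d, pairs, thetas, scales):
    """Compose Givens rotations; non-unit scales relax strict orthogonality."""
    R = np.eye(d)
    for (i, j), theta, s in zip(pairs, thetas, scales):
        R = (s * givens_rotation(d, i, j, theta)) @ R
    return R

# Fine-tuning freezes the pretrained weight W and learns only the O(d)
# rotation parameters, instead of a full d x d update matrix.
rng = np.random.default_rng(0)
d = 4
W_pretrained = rng.standard_normal((d, d))
pairs = [(0, 1), (2, 3), (1, 2)]   # illustrative coordinate pairs to rotate
thetas = [0.1, -0.2, 0.05]         # learnable rotation angles
scales = [1.0, 1.0, 1.0]           # 1.0 => the transform is exactly orthogonal
R = quasi_orthogonal_transform(d, pairs, thetas, scales)
W_adapted = R @ W_pretrained

# With unit scales the composed transform satisfies R^T R = I.
assert np.allclose(R.T @ R, np.eye(d))
```

Orthogonal updates preserve the angles and norms among the pretrained weight's rows, which is commonly cited as the reason this family of methods retains pretrained knowledge during adaptation; allowing the scales to deviate slightly from 1 trades exact orthogonality for extra flexibility.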