Key Concepts
The authors propose a novel parameter-efficient quasi-orthogonal fine-tuning method (qGOFT) that enhances the adaptation capability of pretrained language models to downstream tasks while maintaining parameter efficiency.
Summary
The content discusses the challenges of adapting large-scale pretrained language models (PLMs) to diverse downstream tasks, and proposes two key innovations to address these challenges:
- Enhancing Parameter Efficiency with Equivalent Expressiveness:
  - The authors design a Givens-based Orthogonal Fine-Tuning (GOFT) method that reduces parameter complexity from quadratic (O(d^2)) to linear (O(d)) while retaining expressive power equivalent to Orthogonal Fine-Tuning (OFT) within the special orthogonal group SO(d).
  - To further improve computational efficiency, the authors introduce a novel parallel rotation strategy that reduces the number of sparse matrix multiplications from O(d) to O(log d) (see the first sketch after this list).
- Enhancing Adaptation Capability:
  - Building on GOFT, the authors propose quasi-Givens OFT (qGOFT), which allows adjustable vector norms and slightly tunable angular measurements under soft orthogonality constraints (see the second sketch below).
  - This improves adaptation to the semantic shifts underlying downstream tasks and diverse domains.
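To make the parameter and cost claims concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a tree-structured Givens parameterization: d − 1 learnable angles, i.e. O(d) parameters, are applied as log2(d) stages of disjoint 2x2 rotations, so the whole transform costs O(log d) batched rotation steps rather than O(d) sequential sparse multiplications. The class name `TreeGivensRotation`, the power-of-two assumption on d, and the tournament-style pairing are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a Givens-based orthogonal parameterization (illustrative only).
# d - 1 learnable angles arranged in a binary-tree ("tournament") pattern give an
# O(d)-parameter rotation applied in log2(d) stages of disjoint 2x2 rotations.
import torch
import torch.nn as nn


class TreeGivensRotation(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        assert d > 1 and d & (d - 1) == 0, "sketch assumes d is a power of two"
        self.d = d
        self.num_levels = d.bit_length() - 1  # log2(d) parallel stages
        # Level l holds d / 2^(l+1) disjoint pairs; d/2 + d/4 + ... + 1 = d - 1 angles.
        # Zero-initialized angles give the identity, so training starts from the
        # unmodified pretrained weights, as in OFT-style methods.
        self.angles = nn.ParameterList(
            [nn.Parameter(torch.zeros(d // (2 ** (lvl + 1)))) for lvl in range(self.num_levels)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d). Each level rotates disjoint coordinate pairs in parallel.
        for lvl, theta in enumerate(self.angles):
            stride = 2 ** lvl                     # distance between paired coordinates
            block = 2 * stride                    # spacing between pair "anchors"
            idx_i = torch.arange(theta.numel(), device=x.device) * block
            idx_j = idx_i + stride
            cos, sin = torch.cos(theta), torch.sin(theta)
            xi, xj = x[..., idx_i], x[..., idx_j]
            x = x.clone()
            x[..., idx_i] = cos * xi - sin * xj   # standard 2x2 Givens rotation
            x[..., idx_j] = sin * xi + cos * xj
        return x


# Usage sketch: rotate the output of a frozen pretrained projection;
# only the d - 1 angles are trainable.
d = 16
frozen = nn.Linear(d, d, bias=False)
frozen.weight.requires_grad_(False)
rot = TreeGivensRotation(d)
h = rot(frozen(torch.randn(4, d)))
```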
Extensive experiments on various NLP and vision tasks demonstrate the effectiveness of the proposed methods, which achieve outstanding performance under low parameter budgets.
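The quasi-Givens relaxation can be sketched in the same spirit: each strict 2x2 rotation becomes a general learnable 2x2 block initialized at the identity, and orthogonality is enforced only softly through a penalty added to the task loss, which is what lets vector norms and angles drift slightly. The names `QuasiGivensLayer` and `soft_orthogonality_penalty`, the pairing scheme, and the penalty weight are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a "quasi-Givens" stage: general learnable 2x2 blocks on
# disjoint coordinate pairs, kept near-orthogonal only by a soft penalty.
import torch
import torch.nn as nn


class QuasiGivensLayer(nn.Module):
    def __init__(self, num_pairs: int):
        super().__init__()
        # One 2x2 matrix per coordinate pair, initialized to the identity.
        self.blocks = nn.Parameter(torch.eye(2).repeat(num_pairs, 1, 1))

    def forward(self, pairs: torch.Tensor) -> torch.Tensor:
        # pairs: (..., num_pairs, 2) -> apply each 2x2 block to its own pair.
        return torch.einsum("pij,...pj->...pi", self.blocks, pairs)

    def soft_orthogonality_penalty(self) -> torch.Tensor:
        # ||G^T G - I||_F^2 summed over all blocks; added to the task loss with a
        # small weight so each block stays close to a pure rotation while norms
        # and angles remain slightly adjustable.
        eye = torch.eye(2, device=self.blocks.device)
        gram = self.blocks.transpose(-1, -2) @ self.blocks
        return ((gram - eye) ** 2).sum()


# Usage sketch: a d-dimensional activation viewed as d/2 disjoint pairs.
d, batch = 16, 4
layer = QuasiGivensLayer(num_pairs=d // 2)
x = torch.randn(batch, d)
y = layer(x.view(batch, d // 2, 2)).reshape(batch, d)
loss = y.pow(2).mean() + 1e-2 * layer.soft_orthogonality_penalty()  # toy objective
```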
Statistics
The content does not report specific metrics or figures supporting the authors' key arguments.
Quotes
The content does not contain any striking quotes supporting the authors' key arguments.