Low-Rank Adaptation (LoRA) can effectively approximate the weight updates of fine-tuned target models with matrices of minimal rank, revolutionizing fine-tuning methods.
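As a concrete illustration, the sketch below implements a LoRA-style linear layer in PyTorch: the pretrained weight is frozen and a trainable rank-r update B @ A, scaled by alpha / r, is added to its output. The class name and the default rank and alpha values are illustrative assumptions, not a specific implementation from this work.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight

        # Low-rank factors: A projects down to rank r, B projects back up.
        # B is zero-initialized so the update starts at exactly zero.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because only A and B are trainable, the number of tuned parameters scales with r rather than with the full weight dimensions.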
Moreover, a single linear layer can efficiently produce task-adapted low-rank matrices.
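One way to realize this, sketched below under assumed dimensions, is a hypernetwork consisting of a single nn.Linear that maps a task embedding to the flattened LoRA factors A and B. The task-embedding input, the layer shape, and the class name are assumptions for illustration, not the method described here.

```python
class LoRAGenerator(nn.Module):
    """Single linear layer mapping a task embedding to flattened LoRA factors A and B.

    All dimensions (task_dim, in_features, out_features, rank) are illustrative.
    """

    def __init__(self, task_dim: int, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.rank = rank
        # One linear map emits both factors in a single forward pass.
        self.proj = nn.Linear(task_dim, rank * (in_features + out_features))

    def forward(self, task_emb: torch.Tensor):
        # Assumes a single unbatched task embedding of shape (task_dim,).
        flat = self.proj(task_emb)  # shape: (rank * (in_features + out_features),)
        split = self.rank * self.in_features
        A = flat[:split].view(self.rank, self.in_features)
        B = flat[split:].view(self.out_features, self.rank)
        return A, B
```

The generated factors can then be scaled and added to a frozen base weight exactly as in the LoRA sketch above, so adapting to a new task costs one forward pass through the generator rather than a full fine-tuning run.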