Efficient Fine-Tuning of Large Language Models: The Crucial Role of Layer Normalization
Layer normalization is a key component in parameter-efficient fine-tuning of large language models such as BERT: updating only the layer-normalization parameters can achieve performance comparable to, or better than, full fine-tuning while training significantly fewer parameters.
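As a rough illustration of why this is parameter-efficient, the sketch below counts how few parameters remain trainable when everything except layer normalization is frozen. The parameter names and sizes are hypothetical (modeled loosely on BERT-base naming conventions), not taken from any specific checkpoint:

```python
# Hypothetical parameter table for one embedding block and one encoder layer.
# Names mimic Hugging Face BERT conventions; sizes are illustrative only.
params = {
    "embeddings.word_embeddings.weight": 30522 * 768,
    "embeddings.LayerNorm.weight": 768,
    "embeddings.LayerNorm.bias": 768,
    "encoder.layer.0.attention.self.query.weight": 768 * 768,
    "encoder.layer.0.attention.output.LayerNorm.weight": 768,
    "encoder.layer.0.attention.output.LayerNorm.bias": 768,
    "encoder.layer.0.output.dense.weight": 768 * 3072,
    "encoder.layer.0.output.LayerNorm.weight": 768,
    "encoder.layer.0.output.LayerNorm.bias": 768,
}

def select_trainable(param_sizes, pattern="LayerNorm"):
    """Keep only parameters whose name matches the pattern trainable."""
    return {name for name in param_sizes if pattern in name}

trainable = select_trainable(params)
n_trainable = sum(params[name] for name in trainable)
n_total = sum(params.values())
print(f"trainable: {n_trainable} / {n_total} "
      f"({100 * n_trainable / n_total:.3f}%)")
```

Because each LayerNorm contributes only a weight and bias vector of the hidden size, the trainable fraction stays tiny; in a real framework the same selection is typically done by setting `requires_grad = False` on all non-matching parameters before training.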