Evaluating Parameter-Efficient Fine-Tuning Methods for Improving Low-Resource Language Translation
Parameter-efficient fine-tuning (PEFT) methods adapt large pre-trained language models to diverse downstream tasks while updating only a small fraction of their parameters, offering a balance between adaptability and computational efficiency. This study comprehensively evaluates a range of PEFT architectures for improving low-resource language (LRL) neural machine translation (NMT).