Efficient Fine-Tuning of Large Language Models for Multilingual Machine Translation with Minimal High-Quality Data
Large language models can be effectively fine-tuned for multilingual machine translation using as few as 32 high-quality parallel training instances, achieving performance comparable to models trained on orders of magnitude more data. The choice of translation direction and the quality of the data are the critical factors in achieving successful alignment.