Improving Neural Machine Translation for Chat Conversations: Exploring Traditional NMT Models and Large Language Models
This paper explores several strategies for improving neural machine translation (NMT) performance on chat translation tasks, including fine-tuning models on chat data, Minimum Bayes Risk (MBR) decoding, and self-training. The authors also investigate the potential of large language models (LLMs) for chat translation and discuss the challenges and future research directions in this domain.
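To make the MBR decoding idea concrete: instead of picking the model's single highest-probability output, MBR selects, from a pool of sampled candidates, the translation with the highest expected utility against the other candidates. The sketch below is a minimal illustration, not the paper's implementation; the `utility` function here is a toy unigram-F1 overlap standing in for the neural or string-based metrics (e.g. chrF or COMET) typically used in practice.

```python
from collections import Counter


def utility(hyp: str, ref: str) -> float:
    """Toy utility: unigram F1 overlap (a stand-in for metrics like chrF or COMET)."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec = overlap / sum(h.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)


def mbr_decode(candidates: list[str]) -> str:
    """Return the candidate with the highest average utility against all other candidates."""
    def expected_utility(cand: str) -> float:
        others = [c for c in candidates if c is not cand]
        return sum(utility(cand, o) for o in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)


# Hypothetical candidate pool, as would be produced by sampling from an NMT model.
candidates = [
    "the cat sat on the mat",
    "a cat sat on a mat",
    "the dog ran away",
]
print(mbr_decode(candidates))  # → "the cat sat on the mat"
```

The outlier candidate ("the dog ran away") scores poorly against the others and is rejected, which is the intuition behind MBR: consensus among samples is evidence of translation quality.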