Core Concepts
Improving translation accuracy through instruction tuning in Large Language Models (LLMs).
Statistics
Experiments on IWSLT and WMT benchmarks show an average 53.3% reduction in off-target translation ratio, with average improvements of +5.7 SacreBLEU and +16.4 BLEURT.
The off-target ratio reaches 99.5% for the zero-shot De→Fr direction.
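The off-target ratio is the share of model outputs whose language differs from the requested target language. A minimal sketch of the metric follows; the stopword-based detector is a hypothetical stand-in for a real language-identification tool, not the paper's method.

```python
# Hypothetical stopword sets used as a toy language detector.
STOPWORDS = {
    "en": {"the", "and", "is", "of"},
    "de": {"der", "und", "ist", "nicht"},
    "fr": {"le", "et", "est", "pas"},
}

def detect_lang(text: str) -> str:
    """Guess the language by counting stopword hits per language."""
    tokens = set(text.lower().split())
    return max(STOPWORDS, key=lambda lang: len(tokens & STOPWORDS[lang]))

def off_target_ratio(outputs: list[str], target_lang: str) -> float:
    """Fraction of outputs not detected as the target language."""
    misses = sum(detect_lang(o) != target_lang for o in outputs)
    return misses / len(outputs)

# Example: a De→Fr request where the model drifted into English
# for two of the three outputs.
outputs = [
    "le chat est sur le tapis et le chien dort",
    "the cat is on the mat and the dog sleeps",
    "the model is ignoring the target language of the prompt",
]
print(off_target_ratio(outputs, "fr"))  # → 0.666...
```

A real evaluation would replace the toy detector with a proper language identifier and average the ratio over each translation direction.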
Quotes
"Our method could effectively reduce the off-target translation ratio (averagely -53.3%), thus improving translation quality with average +5.7 SacreBLEU and +16.4 BLEURT."
"When tackling zero-shot directions, LLM heavily encounters the off-target problem, for example, in De→Fr, the off-target ratio reaches 99.5%."