The paper introduces In-Context Cross-Lingual Transfer (IC-XLT), a novel approach for efficient One-Shot Cross-Lingual Transfer in text classification. The key idea is to train a multilingual encoder-decoder model (mT5) with In-Context Tuning (ICT) on the source language (English), so that the model learns both the classification task and the ability to adapt via in-context demonstrations.
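A minimal sketch of how this In-Context Tuning stage could look, assuming a plain seq2seq fine-tuning loop over mT5; the prompt template, example data, and hyperparameters below are illustrative assumptions rather than the authors' exact recipe:

```python
# Sketch of In-Context Tuning (ICT) on the source language (English).
# Prompt format, data, and learning rate are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Each ICT training instance prepends a source-language demonstration
# (input + gold label) to the query, so the model learns to use the context.
english_data = [
    {"demo": ("The battery dies quickly.", "battery"),
     "query": ("The screen cracked after a week.", "display")},
    # ... more (demonstration, query) pairs sampled from the English training set
]

model.train()
for ex in english_data:
    demo_text, demo_label = ex["demo"]
    query_text, query_label = ex["query"]
    source = f"input: {demo_text} label: {demo_label} input: {query_text} label:"
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(query_label, return_tensors="pt").input_ids
    loss = model(**enc, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```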
During inference, the model is adapted to a target language by prepending a One-Shot demonstration in that language to the input, without any gradient updates. This allows the model to leverage the target-language demonstration and improve its cross-lingual transfer performance.
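A corresponding sketch of the inference-time adaptation, assuming an mT5 checkpoint that has already been In-Context Tuned on English; the checkpoint path, prompt format, demonstration, and label set are hypothetical:

```python
# Sketch of IC-XLT inference: a single target-language demonstration is
# prepended to each query and the label is generated with no gradient updates.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "path/to/ict-tuned-mt5"  # hypothetical ICT-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()  # adaptation happens purely in-context

def build_prompt(demo_text, demo_label, query_text):
    # Mirror the prompt format used during In-Context Tuning.
    return f"input: {demo_text} label: {demo_label} input: {query_text} label:"

# One-Shot demonstration in the target language (Spanish here, purely illustrative)
demo_text = "La batería dura muy poco."
demo_label = "battery"
query_text = "La pantalla se ve borrosa con luz solar."

inputs = tokenizer(build_prompt(demo_text, demo_label, query_text),
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the demonstration is consumed entirely through the prompt, adapting to a new target language requires no parameter updates, only a single labeled example in that language.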
The authors evaluate IC-XLT on two multilingual text classification datasets, Aspect Category Detection (ACD) and Domain Classification (MASSIVE), across multiple target languages. The results show that IC-XLT consistently outperforms standard Zero-Shot and Few-Shot Cross-Lingual Transfer approaches, achieving significant performance gains with only a One-Shot demonstration in the target language.
Furthermore, the authors investigate the impact of limited source-language data on the performance of IC-XLT. They find that IC-XLT maintains its advantage over the baselines even when the source-language data is highly constrained, demonstrating its robustness and efficiency in resource-limited scenarios.
The authors also analyze the correlation between the improvements observed in target languages and their representation in the pretraining corpus of the mT5 model, finding that languages with lower representation tend to benefit more from the target-language adaptation through IC-XLT.