Key Concepts
This paper presents a comprehensive survey of the recent progress and emerging trends in multilingual large language models (MLLMs), offering a unified perspective through a novel taxonomy based on alignment strategies.
Summary
The paper provides a thorough review of the advancements in multilingual large language models (MLLMs). It introduces a novel taxonomy that categorizes MLLMs into two main alignment strategies: parameter-tuning alignment and parameter-frozen alignment.
Parameter-Tuning Alignment:
- Pretraining Alignment: Approaches that tune model parameters during the pretraining stage, including from-scratch pretraining and continual pretraining.
- Supervised Fine-Tuning (SFT) Alignment: Methods that fine-tune model parameters on multilingual task data cast in instruction format.
- Reinforcement Learning from Human Feedback (RLHF) Alignment: Techniques that integrate multilingual RLHF data to train more effective reward models.
- Downstream Fine-Tuning Alignment: Strategies that fine-tune model parameters on downstream tasks, using either full-parameter or parameter-efficient updates (see the sketch after this list).
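As a concrete illustration of the parameter-efficient branch of downstream fine-tuning alignment, the minimal sketch below wraps a frozen linear layer with LoRA-style low-rank adapters in plain PyTorch. The class name, rank, and dimensions are illustrative assumptions, not details taken from the surveyed papers.

```python
# Minimal sketch of parameter-efficient (LoRA-style) downstream fine-tuning:
# the base weights stay frozen and only the low-rank adapters are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus the low-rank task-specific update
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```

Only lora_a and lora_b receive gradients, so adapting the model to a new multilingual downstream task touches a small fraction of its parameters.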
Parameter-Frozen Alignment:
- Direct Prompting: Feeding the request to the model as-is, without additional instructions, relying on implicit alignment.
- Code-Switching Prompting: Mixing words from other languages into a single-language utterance to elicit alignment.
- Translation Alignment Prompting: Translating the query into other languages for better cross-lingual alignment.
- Retrieval Augmented Alignment: Incorporating external retrieval to inject additional knowledge during prompting (all four strategies are sketched below).
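The four parameter-frozen strategies differ only in how the prompt is constructed, so they can be contrasted with plain prompt templates. The German query, its English translation, and the retrieved passage below are illustrative placeholders, not examples from the paper.

```python
# Hedged sketch of the four parameter-frozen prompting strategies as templates.
query_de = "Was ist die Hauptstadt von Australien?"
query_en = "What is the capital of Australia?"          # assumed translation of the query
retrieved = "Canberra is the capital city of Australia."  # assumed retrieval hit

direct_prompt = query_de  # query passed to the model unchanged

code_switching_prompt = (
    "Was ist die Hauptstadt (capital) von Australien (Australia)?"
)

translation_prompt = (
    f"Question (German): {query_de}\n"
    f"Question (English): {query_en}\n"
    "Answer in German:"
)

retrieval_augmented_prompt = (
    f"Context: {retrieved}\n"
    f"Question: {query_de}\n"
    "Answer using the context above:"
)

for name, prompt in [("direct", direct_prompt),
                     ("code-switching", code_switching_prompt),
                     ("translation", translation_prompt),
                     ("retrieval-augmented", retrieval_augmented_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
```

The choice of code-switched tokens, pivot language, and retrieval source are all design decisions; the common thread is that none of these strategies modify model parameters.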
The paper also highlights several emerging frontiers and challenges in the MLLM field, including hallucination, knowledge editing, safety, fairness, language extension, and multi-modality extension.
Statistics
There are over 7,000 languages in the world, and for large language models to succeed they should serve diverse countries and languages.
Multilingual pretraining data includes manually created corpora, web-crawled data, and benchmark adaptations.
Multilingual SFT data includes manually created datasets, machine-translated datasets, benchmark adaptations, and MLLM-aided generation.
Multilingual RLHF data is used to train more effective reward models in multilingual contexts.
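To make these data categories concrete, here is a hedged sketch of what a multilingual SFT record in instruction format and an RLHF-style preference pair might look like; the field names, the machine-translated German instruction, and the French example are illustrative assumptions, not the schema of any specific dataset in the survey.

```python
# Illustrative (assumed) record schemas, not taken from a specific dataset.
sft_record = {
    "instruction": "Summarize the following sentence in one clause.",
    "instruction_de": "Fasse den folgenden Satz in einem Teilsatz zusammen.",  # machine-translated
    "input": "The committee postponed the vote until next week.",
    "output": "The vote was postponed.",
    "language": "en->de",
}

rlhf_preference = {
    "prompt": "Explique la photosynthèse en une phrase.",
    "chosen": "La photosynthèse convertit la lumière en énergie chimique dans les plantes.",
    "rejected": "Photosynthesis is when plants eat sunlight.",  # wrong language, lower quality
}

print(sft_record["instruction_de"])
print(rlhf_preference["chosen"])
```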
Quotes
"Multilingual Large Language Models are capable of using powerful Large Language Models to handle and respond to queries in multiple languages, which achieves remarkable success in multilingual natural language processing tasks."
"To this end, in this paper, we present a thorough review and provide a unified perspective to summarize the recent progress as well as emerging trends in multilingual large language models (MLLMs) literature."