Improving Cross-Lingual Generalization of Adapter-Based Language Models with Scheduled Unfreezing
Scheduled unfreezing methods, such as Gradual Unfreezing (GU) and Linear Probing then Fine-Tuning (LPFT), can improve the cross-lingual generalization performance of adapter-based language models, even in settings free of catastrophic forgetting.
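To make the idea of a scheduled-unfreezing method concrete, here is a minimal, framework-agnostic sketch of the Gradual Unfreezing schedule: layers start frozen, and one additional layer (top-down, starting nearest the output) becomes trainable each epoch. This is only an illustration of the schedule, not the paper's implementation; the function name and layer indexing are assumptions for the example.

```python
def gradual_unfreeze_schedule(n_layers: int, n_epochs: int):
    """Yield, per epoch, the set of trainable layer indices (0 = bottom layer).

    Illustrative sketch of Gradual Unfreezing (GU): at epoch e, the top
    min(e, n_layers) layers are unfrozen and everything below stays frozen.
    """
    for epoch in range(1, n_epochs + 1):
        k = min(epoch, n_layers)                   # layers unfrozen so far
        yield set(range(n_layers - k, n_layers))   # top-k layers are trainable


schedule = list(gradual_unfreeze_schedule(n_layers=4, n_epochs=4))
# Epoch 1 trains only the top layer; by epoch 4 all four layers are trainable.
```

LPFT follows a related two-stage pattern: first only the task head (a linear probe) is trained with the backbone frozen, then the full model is fine-tuned — i.e., a schedule with just two stages instead of one per layer.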