Core Concepts
The authors propose DA-Net, a Disentangled and Adaptive Network, to address challenges in multi-source cross-lingual transfer learning. The approach purifies input representations so that each source-specific branch receives less interference from other sources, and aligns class-level distributions between each source-target language pair to improve model performance.
Abstract
DA-Net introduces two modules, Feedback-guided Collaborative Disentanglement (FCD) and Class-aware Parallel Adaptation (CPA), to enhance multi-source cross-lingual transfer learning. Experimental results demonstrate that DA-Net improves adaptation across languages and mitigates interference among multiple sources.
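To make the multi-source setup concrete, a common pattern (and one consistent with the per-source branches described here) is to let each source language contribute its own classifier over the target example and combine their outputs with per-source weights. This is only an illustrative sketch, not DA-Net's actual combination rule; the function names and the weighting scheme are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def combine_source_predictions(logits_per_source, weights):
    """Hypothetical multi-source combination: each of K source-specific
    classifiers scores one target example; their class distributions are
    averaged with normalized per-source weights.

    logits_per_source: (K, C) array, weights: (K,) array.
    Returns a (C,) class distribution.
    """
    probs = softmax(logits_per_source, axis=-1)  # per-source class probs
    w = weights / weights.sum()                  # normalize source weights
    return (w[:, None] * probs).sum(axis=0)      # weighted average
```

In a real system the weights might reflect language similarity or source-classifier confidence on the target example; here they are just given constants.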
Key points:
Multi-source cross-lingual transfer learning aims to transfer knowledge from labeled source languages to an unlabeled target language.
Existing methods struggle because a shared encoder entangles information from different source languages, so each source-specific prediction is contaminated by the others.
DA-Net proposes FCD to purify input representations and CPA to align class-level distributions, improving model performance.
Experimental results on NER, RRC, and TEP tasks involving 38 languages validate the effectiveness of DA-Net.
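The key points above mention aligning class-level distributions without target annotations, which is typically done with pseudo-labels on the target data. A minimal illustration of class-wise alignment is to match per-class feature centroids between a source and the target; this is a generic sketch of the idea, not CPA's actual loss, and all names are hypothetical.

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; zero vector for classes with no samples."""
    cents = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def class_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo_labels, num_classes):
    """Average squared distance between matching class centroids of a
    source language (gold labels) and the target language (pseudo-labels).
    Minimizing this pulls same-class features together across languages.
    """
    cs = class_centroids(src_feats, src_labels, num_classes)
    ct = class_centroids(tgt_feats, tgt_pseudo_labels, num_classes)
    return float(np.mean(np.sum((cs - ct) ** 2, axis=1)))
```

Because the target side relies on pseudo-labels, the quality of this alignment depends on pseudo-label accuracy, which is exactly why the paper calls class-wise alignment challenging without target annotation.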
Quotes
"No annotation in the target language makes class-wise alignment challenging."
"DA-Net's FCD method helps purify input representations, reducing interference among sources."
"The CPA method bridges the language gap between source-target pairs for improved adaptation."