Core Concepts
Pre-trained language models (PLMs) generalize effectively to code-switched (CS) text, offering insight into how well they handle mixed-language input.
Statistics
"Our findings reveal that pre-trained language models are effective in generalising to code-switched text."
"Our results indicate that PLMs are able to distinguish between CS text at a sentence level and token level in our detection experiment."
Quotes
"Despite its widespread use online and recent research trends in this area, research in code-switching presents unique challenges."
"Our findings seem to indicate that PLMs are surprisingly good at generalising across CS text."