Key Concepts
Pre-trained language models (PLMs) generalize effectively to code-switched (CS) text, distinguishing CS from monolingual text at both the sentence and token level.
Statistics
"Our findings reveal that pre-trained language models are effective in generalising to code-switched text."
"Our results indicate that PLMs are able to distinguish between CS text at a sentence level and token level in our detection experiment."
Quotes
"Despite its widespread use online and recent research trends in this area, research in code-switching presents unique challenges."
"Our findings seem to indicate that PLMs are surprisingly good at generalising across CS text."