Pre-trained language models generalize effectively to code-switched text, shedding light on their ability to capture and represent linguistic information across languages.