Weak crossover effects differ in nature between matrix and relative clauses in Japanese. While matrix weak crossover configurations are consistently rejected, relative weak crossover configurations are frequently accepted, suggesting that the relevant distinction is structural and not based solely on linear precedence.
Small language models trained on character-level inputs can capture linguistic structures at various levels, including syntax, lexicon, and phonetics, performing comparably to or even outperforming larger subword-based models.
This paper introduces a user-friendly web interface and a Python library that provide easy access to and manipulation of the extensive linguistic information in the Sejong dictionary, with a focus on Korean verb subcategorization frames.
Emotions are fundamentally linked to coping strategies that people use to deal with salient situations. This study introduces a novel corpus, COPING, to investigate how these coping strategies (attack, contact, distance, reject) are expressed in language and can be computationally identified.
Existing LLM evaluators suffer from a bias toward superficial quality while overlooking instruction-following ability. This work proposes systematic methods to mitigate this bias, including online calibration and offline contrastive training, effectively improving the fairness of LLM evaluation.
Large language models (LLMs) can generate human-like text, but the extent to which they truly replicate human language use patterns remains unclear. This study introduces a comprehensive psycholinguistic benchmark to systematically evaluate the humanlikeness of 20 prominent LLMs across various linguistic levels.
Neural network models can be leveraged during linguistic fieldwork to improve the efficiency of data collection.
Neural models can guide linguists during fieldwork by optimizing the data collection process and accounting for the dynamics of linguist-speaker interactions.
Language models exhibit structural priming effects that can be explained by inverse frequency effects, such as prime surprisal and verb preference, as well as lexical dependence between prime and target.
Online fan communities, such as fanfiction and forums, collaboratively reconstruct and renegotiate narrative elements like characters, leading to divergent representations from the original source material.