Core Concepts
While large language models (LLMs) have made significant strides across language tasks, their developmental trajectory does not mirror human language acquisition. Their capabilities are shaped more by the training data and model architecture than by any progression that mimics the stages of human language development.
Stats
The researchers evaluated 15 LLMs, including GPT-2, RoBERTa, ALBERT, T5, OPT, Llama2, Mistral, Llama3, and Gemma2.
The study utilized a three-stage framework based on human language acquisition, encompassing basic word understanding, complex grammar comprehension, and advanced logical reasoning.
Five linguistic dimensions were analyzed to assess generation ability: noun usage, average word length, clause complexity, lexical diversity (measured as type-token ratio), and auxiliary verb usage.
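Of these dimensions, the type-token ratio is the most formulaic: unique word forms (types) divided by total words (tokens). A minimal sketch of the computation is below; the regex-based tokenizer is an assumption for illustration, since the study's exact tokenization is not specified here.

```python
import re

def type_token_ratio(text: str) -> float:
    """Lexical diversity: number of unique word forms (types)
    divided by total word count (tokens).

    Note: uses a simple lowercase regex tokenizer as an assumption;
    the study's actual tokenization may differ.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "the cat sat on the mat and the dog sat too"
# 11 tokens, 8 unique types -> 8/11
print(round(type_token_ratio(sample), 3))
```

A higher ratio indicates more varied vocabulary; note that raw TTR shrinks as texts get longer, so comparisons are only meaningful on texts of similar length.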
Quotations
"Although recent LMs outperform earlier models in overall performance, their developmental trajectory does not strictly follow the path of human language acquisition."
"Notably, in generation tasks, LMs are more similar to human performance in areas where information is easier to extract from the corpus, such as average word length, clauses, and auxiliary verbs."
"Register theory offers a plausible explanation for these observations, suggesting that the linguistic features of the training data have a substantial impact on the models’ abilities."