Core Concepts
Large Language Models excel at formal linguistic competence (knowledge of linguistic rules and patterns) but struggle with functional linguistic competence (using language to reason and act in the world), and this gap explains much of their uneven performance.
Abstract
The article evaluates Large Language Models (LLMs) based on their formal and functional linguistic competence.
It discusses the common conflation of language and thought, the limits of the Turing test as a measure of intelligence, and related fallacies about language processing.
LLMs are shown to excel in formal linguistic competence but face challenges in functional linguistic tasks.
The distinction between formal and functional competence is grounded in human neuroscience findings.
LLMs have limitations in areas such as world knowledge, situation modeling, social reasoning, and formal reasoning.
The article provides insights into how LLMs learn hierarchical structure, linguistic abstractions, syntactic constructions, and more.
Challenges faced by LLMs include reliance on surface statistical regularities, training data volumes far exceeding what human learners receive, and weaker performance on languages other than English.
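Formal linguistic competence of the kind described above is commonly probed with minimal pairs: the model should assign higher probability to a grammatical sentence than to a near-identical ungrammatical one. A minimal self-contained sketch of that test, using a toy bigram scorer with hypothetical counts as a stand-in for a real LLM:

```python
# Sketch of minimal-pair probing for formal competence (subject-verb agreement).
# A real evaluation would score full sentences with an actual LLM; the toy
# bigram model below (all counts hypothetical) keeps the example runnable.
import math

# Hypothetical co-occurrence counts reflecting English agreement patterns.
BIGRAM_COUNTS = {
    ("keys", "are"): 90, ("keys", "is"): 10,
    ("key", "is"): 85, ("key", "are"): 15,
}

def score(subject: str, verb: str) -> float:
    """Log-probability of the verb given the subject under the toy model."""
    total = sum(c for (s, _), c in BIGRAM_COUNTS.items() if s == subject)
    return math.log(BIGRAM_COUNTS[(subject, verb)] / total)

# Minimal pair: "The keys are ..." (grammatical) vs "The keys is ..." (not).
assert score("keys", "are") > score("keys", "is")
print("model prefers grammatical form:", score("keys", "are") > score("keys", "is"))
# → model prefers grammatical form: True
```

The same comparison, run over many curated sentence pairs, is how benchmarks establish that LLMs have strong formal competence even while functional tasks remain hard.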
Key Claims
"LLMs today can produce text that is difficult to distinguish from human output."
"Claims have emerged that LLMs are showing 'sparks of artificial general intelligence'."
"LLMs exhibit a gap between formal and functional competence skills."
Quotes
"Models that use language in humanlike ways would need to master both formal and functional linguistic competence."
"LLMs exhibit knowledge of hierarchical structure and linguistic abstractions resembling human brain responses during language processing."
"LLMs possess substantial formal linguistic competence but face challenges with functional tasks."