Core Concepts
The success of Large Language Models (LLMs) is attributed to a model that integrates human decision-making theories from philosophy, sociology, and computer science.
Abstract
The paper integrates three established human decision-making theories into a model that explains the success of Large Language Models (LLMs). It opens with reasoning in AI research, then turns to reactive systems and sociological explanations, emphasizing that neural-network architectures are not themselves experts in the problems they solve. It revisits the historical classification of researchers as "neats" or "scruffies", two contrasting approaches to AI development, before examining conversational user interfaces and apparent mind-reading phenomena in Natural Language Understanding (NLU). It concludes by proposing a pragmatic model of NLU grounded in shared practices rather than traditional symbolic representations.
AI Research Overview:
Integration of human decision-making theories into a model of LLM success.
Historical perspective on AI research approaches.
Focus on apparent mind-reading phenomena in NLU.
Human Decision-Making Theories:
Theories drawn from philosophy, sociology, and computer science.
Transition from traditional symbolic representations to shared practices.
Proposal of a pragmatic model of NLU.
Mind-Reading Phenomena:
Exploration of apparent mind-reading in conversational interactions.
Shift towards shared practices over symbolic representations.
Emphasis on understanding practices in language use.
Stats
arXiv:2402.08403v2 [cs.CL] 21 Mar 2024
Quotes
"The feeling in the late 1980s was that choosing a system of symbols... was problematic."
"We humans do situated action using 'insect level' intelligence in a benign environment."