
Analyzing Structural Priming Predictions with a Cognitively Motivated Parser


Core Concepts
The authors propose a framework that uses a cognitively motivated parser to generate quantitative priming predictions from theoretical syntax and evaluate them against empirical human behavior, focusing on reduced relative clause representations in English.
Abstract

Structural priming is a psycholinguistic paradigm used to study human sentence representations. The study introduces SPAWN, a parser that generates quantitative priming predictions from theoretical syntax. By comparing different theories, the research highlights how SPAWN can adjudicate between competing assumptions about sentence structures. The study focuses on reduced relative clauses in English and demonstrates how parsing decisions are made based on cognitive principles proposed by ACT-R.

The research explores the differences between two syntactic theories, Whiz-Deletion and Participial-Phase, in predicting priming effects for different types of relative clauses. Through experiments with human participants and computational models, the study evaluates the accuracy of these predictions. The results suggest that the Participial-Phase theory better captures human sentence representations compared to the Whiz-Deletion theory.

Overall, the study provides insights into how parsing mechanisms can influence structural priming predictions and sheds light on the cognitive processes involved in language comprehension.


Stats
Participants recruited: 769 US-based participants
Compensation: 8.35 USD
Training data: 100 or 500 sentences used for training SPAWN instances
Quotes
"We propose a framework to generate priming predictions from syntactic theory." "SPAWN makes it possible to evaluate theoretical differences in sentence processing predictions."

Key Insights Distilled From

by Grusha Prasa... arxiv.org 03-13-2024

https://arxiv.org/pdf/2403.07202.pdf
SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser

Deeper Inquiries

How do parsing mechanisms impact structural priming predictions?

The parsing mechanism plays a crucial role in the structural priming predictions that models like SPAWN generate. In the study, two theories of reduced relative clauses were implemented in SPAWN, which uses an inhibition-based backtracking mechanism for reanalysis. Because the theories posit different intermediate structures, the syntactic categories SPAWN retrieves and integrates during parsing differ between them, and so do the predicted behavioral responses. How reanalysis is conducted also matters: whether the parser backtracks based on uncertainty or restarts from the beginning of the sentence changed which account, Whiz-Deletion or Participial-Phase, aligned with the empirical data. The specific algorithms and mechanisms a parser employs therefore directly shape its predictions about how humans process sentences and exhibit priming effects.
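The retrieval step described above follows ACT-R's activation dynamics: an analysis that was used recently (e.g., on a prime trial) has higher activation and is more likely to be retrieved again, producing priming. The sketch below is a minimal, hypothetical illustration of that idea, not SPAWN's actual implementation; the function names (`activation`, `retrieve`) and the numbers are assumptions for the example.

```python
import math
import random

def activation(base, use_times, decay=0.5, now=10.0):
    """ACT-R base-level activation: a sum of decaying traces,
    so recent uses of an analysis raise its activation (priming)."""
    return base + math.log(sum((now - t) ** -decay for t in use_times))

def retrieve(candidates, noise=0.25):
    """Retrieve the analysis with the highest noisy activation."""
    scored = {name: act + random.gauss(0, noise) for name, act in candidates.items()}
    return max(scored, key=scored.get)

# Two competing analyses of an ambiguous verb like "selected":
# a main-verb reading vs. a reduced-relative (participle) reading.
prime_uses = [2.0, 6.0]   # the RR analysis was just used on a prime trial
unprimed_uses = [2.0]     # the MV analysis was not recently used
candidates = {
    "reduced_relative": activation(0.0, prime_uses),
    "main_verb": activation(0.0, unprimed_uses),
}
```

Under these toy settings the recently used reduced-relative analysis has the higher activation, so it is retrieved more often on the target trial, which is the priming effect the framework quantifies.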

What are the implications of weak prior knowledge on behavioral predictions?

Weak prior knowledge has significant implications for the behavioral predictions of models like SPAWN. Participants in structural priming experiments are assumed, consistent with previous studies, to enter the task with relatively weak biases about which syntactic analyses are likely. When the competing theoretical accounts were evaluated with SPAWN instances trained on 0, 100, or 500 sentences, the untrained instances captured human behavior best. As training increased, model predictions deviated from the empirical data because the additional exposure introduced stronger biases. This suggests that weak priors are essential for accurate behavioral predictions: they more closely approximate how individuals approach language processing tasks without strong preconceived notions.
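The intuition that more training data makes a single priming exposure matter less can be illustrated with a simple count-based posterior. This is a hedged sketch of the general statistical point, not SPAWN's learning mechanism; `posterior_prob` and the counts are hypothetical.

```python
def posterior_prob(category_counts, category, prior_strength=1.0):
    """Posterior probability of an analysis under add-one-style smoothing.
    With large counts (heavy training), one new observation barely moves
    the estimate; with small counts (weak prior), it moves it a lot."""
    total = sum(category_counts.values()) + prior_strength * len(category_counts)
    return (category_counts[category] + prior_strength) / total

# Untrained model: a single primed reduced-relative exposure dominates.
weak = {"reduced_relative": 1, "main_verb": 0}
# Model trained on 500 sentences: the same single exposure is diluted.
strong = {"reduced_relative": 51, "main_verb": 450}
```

Here the untrained (weak-prior) model assigns a much higher probability to the just-primed analysis than the heavily trained one does, mirroring why untrained SPAWN instances showed larger, more human-like priming effects.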

How can eye-tracking data be used to validate fine-grained predictions generated by SPAWN?

Eye-tracking data can serve as a valuable tool for validating the fine-grained predictions that models like SPAWN generate. By recording participants' eye movements during reading, researchers obtain word-by-word measures of processing difficulty. Predicted patterns derived from SPAWN's retrieval-integration-reanalysis-null prediction cycles can then be compared against observed fixation durations and regressive saccades, providing a direct test of whether the model captures real-time comprehension dynamics at a granular level rather than only end-of-sentence responses.
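One straightforward way to carry out such a validation is to correlate per-word model predictions with observed fixation durations. The sketch below assumes hypothetical data; the variable names and values are illustrative, not from the paper.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-word model predictions (number of processing steps,
# e.g. retrievals plus reanalyses) vs. observed first-pass fixations (ms).
predicted_steps = [3, 5, 9, 4, 2]
fixation_ms = [210, 250, 390, 240, 200]
fit = pearson_r(predicted_steps, fixation_ms)
```

A strong positive correlation would indicate that words the model predicts to be costly (e.g., those triggering reanalysis) are also the words readers fixate longest, supporting the model's fine-grained processing claims.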