Simulating opinion dynamics with Large Language Model (LLM) agents reveals an inherent bias towards factually accurate information, which limits their ability to portray individuals who hold fact-resistant beliefs. Inducing confirmation bias in the agents produces opinion fragmentation, highlighting both the promise and the limitations of LLM agents for modeling human-like opinion dynamics.
Accurate models of human opinion dynamics are crucial for understanding a range of societal phenomena. Agent-Based Models (ABMs) have traditionally been used for this purpose but oversimplify human behavior. This study proposes using LLM agents to simulate opinion dynamics more realistically by focusing on communicative interactions within small social groups. The findings suggest that while LLM agents tend to converge towards accurate information, inducing confirmation bias can lead to opinion fragmentation, consistent with existing ABM research.
The study examines how cognitive biases shape group-level opinion dynamics simulated by LLM agents. Results show that stronger confirmation bias leads to greater opinion diversity among agents, replicating findings from traditional ABMs. The study also investigates how initial opinion distributions affect the agents' convergence towards a ground-truth consensus across different topics.
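To make the simulation setup concrete, below is a minimal sketch of an LLM-agent opinion-dynamics loop with a prompt-induced confirmation bias. It is not the authors' implementation: the `query_llm` function is a hypothetical stand-in for a real chat-model call, and the -2 to +2 opinion scale, the bias instruction wording, and the example claim string are all assumptions made for illustration.

```python
import random
from dataclasses import dataclass, field

# Hypothetical stand-in for an LLM call; a real run would query a chat model here.
def query_llm(prompt: str) -> str:
    # Placeholder reply so the loop runs end to end: a random rating in -2..+2.
    return str(random.choice([-2, -1, 0, 1, 2]))

@dataclass
class Agent:
    name: str
    opinion: int                                  # rating on an assumed -2..+2 scale
    memory: list = field(default_factory=list)    # messages the agent has seen

# Assumed phrasing for inducing confirmation bias via the system prompt.
CONFIRMATION_BIAS_INSTRUCTION = (
    "You give more weight to messages that agree with your current view "
    "and tend to dismiss messages that contradict it."
)

def step(agents, topic, biased: bool):
    """One interaction round: each agent reads a peer's message and restates its opinion."""
    for agent in agents:
        partner = random.choice([a for a in agents if a is not agent])
        agent.memory.append(f"{partner.name} rates the claim '{topic}' as {partner.opinion}.")
        prompt = (
            f"You are {agent.name}. Your current rating of the claim '{topic}' is "
            f"{agent.opinion} on a scale from -2 (strongly disagree) to +2 (strongly agree).\n"
            + (CONFIRMATION_BIAS_INSTRUCTION + "\n" if biased else "")
            + "Recent messages:\n" + "\n".join(agent.memory[-3:]) + "\n"
            "Reply with only your updated integer rating."
        )
        reply = query_llm(prompt)
        try:
            agent.opinion = max(-2, min(2, int(reply)))
        except ValueError:
            pass  # keep the previous opinion if the reply is not an integer

if __name__ == "__main__":
    topic = "Example fact-sensitive claim"  # placeholder; the paper's actual topics differ
    agents = [Agent(f"Agent{i}", random.choice([-2, -1, 0, 1, 2])) for i in range(6)]
    for _ in range(10):
        step(agents, topic, biased=True)
    print([a.opinion for a in agents])  # inspect whether opinions converged or fragmented
```

Running the loop with `biased=False` versus `biased=True` and comparing the spread of final opinions mirrors, in miniature, the comparison the study draws between convergence towards accurate information and bias-driven fragmentation.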