Simulating opinion dynamics with Large Language Models (LLMs) reveals their inherent bias towards factual information, which limits their ability to simulate individuals who hold fact-resistant beliefs. Inducing confirmation bias in the agents produces opinion fragmentation, highlighting both the promise and the limitations of LLM agents for understanding human-like opinion dynamics.
Accurate models of human opinion dynamics are crucial for understanding a range of societal phenomena. Agent-Based Models (ABMs) have traditionally been used for this purpose but oversimplify human behavior. This study proposes using LLMs to simulate opinion dynamics more realistically by focusing on communicative interactions within small social groups. The findings suggest that while LLM agents tend to converge towards accurate information, inducing confirmation bias leads to opinion fragmentation, consistent with existing ABM research.
The study explores the impact of cognitive biases on group-level opinion dynamics simulated by LLM agents. Results show that stronger confirmation bias leads to greater diversity in opinions among agents, replicating findings from traditional ABMs. Additionally, the study investigates the effects of initial opinion distributions on agent convergence towards ground truth consensus across different topics.
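The paper's agents are LLMs prompted to converse, but the fragmentation effect it replicates can be illustrated with a classic bounded-confidence ABM of the kind the study compares against. Below is a minimal, self-contained sketch (not the authors' code); the agent count, update rate, and tolerance values are illustrative assumptions, with the `tolerance` parameter standing in for confirmation-bias strength: agents only move toward opinions already close to their own.

```python
import random

def simulate(n_agents=20, steps=2000, tolerance=0.3, rate=0.5, seed=0):
    """Pairwise opinion updates on a 0-1 scale: an agent moves toward a peer
    only when the peer's opinion lies within `tolerance` of its own.
    A narrower tolerance (stronger confirmation bias) rejects more
    disagreement and tends to produce fragmented opinion clusters."""
    rng = random.Random(seed)
    opinions = [rng.uniform(0.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)   # random communicating pair
        diff = opinions[j] - opinions[i]
        if abs(diff) <= tolerance:              # accept only confirming views
            opinions[i] += rate * diff
            opinions[j] -= rate * diff
    return sorted(opinions)

if __name__ == "__main__":
    # Weak confirmation bias (wide tolerance): opinions tend to converge.
    print("weak bias  :", [round(o, 2) for o in simulate(tolerance=0.5)])
    # Strong confirmation bias (narrow tolerance): opinions fragment.
    print("strong bias:", [round(o, 2) for o in simulate(tolerance=0.1)])
```

Running the script shows a single consensus cluster under the wide tolerance and several persistent clusters under the narrow one, mirroring the fragmentation pattern the LLM-agent simulations reproduce.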
Key insights obtained from: Yun-Shiuan C..., arxiv.org, 03-13-2024
https://arxiv.org/pdf/2311.09618.pdf