Understanding Opinion Dynamics with LLM-based Agents
Large Language Models (LLMs) exhibit an inherent bias toward factual information, which limits their effectiveness in simulating individuals who hold fact-resistant beliefs, such as climate change denial. Introducing confirmation bias into the agents leads to opinion fragmentation, showcasing both the potential and the limitations of LLM-based agents for understanding opinion dynamics.
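The fragmentation effect described above can be illustrated, independently of any LLM, with a classical bounded-confidence model of opinion dynamics (a Deffuant-style simulation). This is a hypothetical sketch, not the paper's method: here "confirmation bias" is modeled as a threshold `epsilon` below which agents ignore opinions too far from their own; all function names and parameter values are illustrative assumptions.

```python
import random

def simulate(n_agents=50, steps=20000, epsilon=0.2, mu=0.5, seed=0):
    """Deffuant-style bounded-confidence dynamics.

    Opinions live in [0, 1]. At each step a random pair of agents
    interacts; they move toward each other (by fraction mu) only if
    their opinions differ by less than epsilon -- a simple stand-in
    for confirmation bias (discounting distant views).
    """
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if abs(opinions[i] - opinions[j]) < epsilon:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

def count_clusters(opinions, tol=0.05):
    """Count opinion clusters: sorted opinions separated by gaps > tol."""
    clusters = 0
    prev = None
    for op in sorted(opinions):
        if prev is None or op - prev > tol:
            clusters += 1
        prev = op
    return clusters
```

With a narrow confidence interval (e.g. `epsilon=0.1`) the population typically splits into several stable opinion clusters, whereas a wide interval (e.g. `epsilon=0.5`) tends toward consensus, mirroring the fragmentation-versus-convergence contrast the abstract points to.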