The study investigates user strategies and behaviors when interacting with GPT-powered simulated robot agents in a VR setting. Participants engage in task-oriented dialogue, adapting their communication styles to correct agent misconceptions and navigate challenges.
The study reveals how users perceive interactions with LLM-based agents, emphasizing the importance of establishing a shared world model between users and agents. Participants predominantly adopt an instruction-based dialogue approach, treating agents as recipients of commands while occasionally slipping into more conversational language.
Participants demonstrate adaptive communication strategies to address conflicts in perception between users and agents, showcasing a nuanced understanding of the virtual environment. The study sheds light on the dynamics of human-robot interaction mediated by LLMs, offering valuable insights for future research in robotics and AI.