The framing of uncertainty in the outcomes of AI-based decision aids can significantly impact users' longitudinal trust and willingness to rely on the system.
Allowing humans to interactively guide machine attention does not consistently improve the accuracy of human-AI teams in fine-grained image classification tasks.
Fair machine guidance aims to educate individuals on making unbiased decisions by leveraging fairness-aware machine learning to identify and address their biases.
This study presents a comprehensive mapping of the current research landscape on human interaction patterns with large language models, focusing on the perspectives of collaboration and creativity.
Humans can discern the relative accuracy of LLM-generated content, ranking it as genuine > minor hallucination > major hallucination. Warning labels reduce the perceived accuracy and increase dislike of hallucinated content, without significantly affecting genuine content.
Evolving Agents is a novel architecture that enables agents to exhibit differentiated behaviors corresponding to their diverse personalities and to undergo continuous personality evolution based on external information during interactive simulation.
This article proposes a process framework for understanding and managing trust in collaborative human-AI (HAI) teams. The framework, called CHAI-T, incorporates the context specificity, team processes, and temporal dynamics that characterize trust development and maintenance in HAI teaming.
A novel human-LLM interaction framework, Low-code LLM, incorporates simple low-code visual programming interactions to achieve more controllable and stable responses from LLMs for complex tasks.
Cognitive biases, such as anthropomorphism and the framing effect, can significantly affect human agreement with AI recommendations in hiring decisions, highlighting the need for tailored approaches to AI product design.
This paper presents a taxonomy that categorizes the various interaction modes between humans and large language models (LLMs), aiming to empower users to tackle complex tasks by utilizing LLMs beyond the default conversational prompting paradigm.