Dynamic Human Trust Modeling of Autonomous Agents with Varying Capability and Strategy


Core Concepts
The core message of this study is that the temporal ordering of an autonomous agent's capability and strategy affects how human trust in the agent evolves over time.
Abstract

This study explores the dynamics of human trust in a screen-based human-autonomy teaming task. Subjects were paired with autonomous agents that had varying capability (20%, 50%, or 100% of outliers detected) and used one of three search strategies (Lawnmower, Random, or Omniscient).

The key findings are:

  1. Subjects' self-reported trust in the autonomous agents was influenced by the temporal ordering of the agents' capability and strategy. Subjects who interacted with agents whose capability changed every trial (Group 0) showed more erratic trust trajectories compared to those who saw agents with a constant capability in each block (Group 1).

  2. Time-series modeling, in particular ARIMAX models, captured the dynamics of trust better than linear regression. The ARIMAX models revealed the effects of the temporal ordering of agent performance on estimated trust, suggesting that recency bias may affect how subjects weigh the contribution of strategy or capability to trust.

  3. Cross-validating the models between the two groups showed a moderate improvement in next-trial trust prediction, emphasizing the importance of accounting for temporal effects when modeling human-robot collaboration (a minimal sketch of this kind of model follows this list).
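
The paper's exact model specification and software are not given in this summary, so the following is only a sketch of the general approach: an ARIMAX-style model of trial-by-trial trust fit with statsmodels' SARIMAX, using agent capability and strategy as exogenous regressors, with a one-step-ahead forecast standing in for next-trial prediction. The variable names, the (1, 0, 1) order, the number of trials, and the synthetic data are illustrative assumptions, not values from the study.

```python
# Sketch of an ARIMAX-style trust model with exogenous agent features.
# All data below are synthetic; the study's actual design and model order
# may differ.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n_trials = 45  # assumed number of trials for one subject

# Hypothetical per-trial regressors: agent capability and one-hot strategy.
capability = rng.choice([0.2, 0.5, 1.0], size=n_trials)
strategy = pd.get_dummies(
    rng.choice(["lawnmower", "random", "omniscient"], size=n_trials),
    drop_first=True,
).astype(float)
exog = pd.concat([pd.Series(capability, name="capability"), strategy], axis=1)

# Synthetic self-reported trust (0-100 scale) so the sketch runs end to end.
trust = 50 + 30 * capability + rng.normal(0, 5, size=n_trials)

# ARIMAX(1, 0, 1): trust depends on its own recent history (recency effects)
# plus the agent's current capability and strategy.
model = SARIMAX(trust, exog=exog, order=(1, 0, 1))
fit = model.fit(disp=False)
print(fit.summary())

# One-step-ahead forecast as a stand-in for next-trial trust prediction;
# here the next trial is assumed to repeat the last trial's conditions.
next_exog = exog.iloc[[-1]]
print(fit.forecast(steps=1, exog=next_exog))
```

Cross-group validation would then amount to fitting such a model on one group's series and scoring these one-step-ahead forecasts on the other group's trials.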

The study demonstrates the need to represent autonomous agent characteristics over time to accurately capture changes in human trust. Understanding the connections between agent behavior, agent performance, and human trust is crucial for improving human-robot collaborative tasks.


Stats
The autonomous agent reported finding x outliers as a fraction of its capability (20%, 50%, or 100%).
The subject reported finding y outliers with their own spotlight.
The subject reported the total number of outliers hidden in the grid.
Quotes
"Trust is an emerging area of study in human-robot collaboration. Many studies have looked at the issue of robot performance as a sole predictor of human trust, but this could underestimate the complexity of the interaction." "A time series modeling approach reveals the effects of temporal ordering of agent performance on estimated trust. Recency bias may affect how subjects weigh the contribution of strategy or capability to trust." "Understanding the connections between agent behavior, agent performance, and human trust is crucial to improving human-robot collaborative tasks."

Deeper Inquiries

How might the inclusion of additional agent characteristics, such as the ability to communicate intent or adapt its strategy, influence the dynamics of human trust?

Incorporating additional agent characteristics, such as the ability to communicate intent or adapt its strategy, can significantly impact the dynamics of human trust in autonomous systems.

Communication of intent can enhance transparency and predictability, allowing humans to better understand the agent's actions and intentions. This transparency can lead to increased trust as humans feel more informed and in control of the collaborative task. Moreover, clear communication can reduce uncertainty and ambiguity, which are common factors that influence trust in human-robot interactions.

Adapting the agent's strategy based on feedback and performance can also positively influence human trust. An agent that can learn and adjust its behavior over time to align with human preferences and expectations can build trust through improved task performance and reliability. Adaptive strategies can demonstrate the agent's responsiveness to human needs and preferences, fostering a sense of partnership and collaboration.

Overall, the inclusion of these additional agent characteristics can lead to a more dynamic and responsive interaction, enhancing the overall trust between humans and autonomous systems.

How could the insights from this study be applied to the design of autonomous systems that aim to build and maintain human trust over longer-term interactions?

The insights from this study offer valuable guidance for designing autonomous systems that aim to build and maintain human trust over longer-term interactions. By understanding the impact of agent characteristics, such as capability and strategy, on human trust dynamics, designers can tailor the behavior and communication strategies of autonomous systems to optimize trust development. Here are some ways these insights can be applied:

  1. Adaptive behavior: Design autonomous systems that can adapt their behavior based on human feedback and performance evaluations. By incorporating mechanisms for learning and adjustment, the system can continuously improve its performance and reliability, thereby enhancing human trust over time.

  2. Transparent communication: Implement clear and transparent communication mechanisms that allow the system to convey its intent, decisions, and reasoning to the human collaborator. Providing explanations for actions and decisions can increase trust by reducing uncertainty and promoting understanding.

  3. Behavioral feedback: Integrate mechanisms for collecting behavioral feedback on trust, such as observing human reactions, responses, and performance during interactions. This feedback can be used to adjust the system's behavior and strategy to better align with human expectations and preferences.

  4. Long-term trust building: Focus on building trust gradually over longer-term interactions by consistently demonstrating reliability, consistency, and responsiveness. By maintaining a positive track record and adapting to changing circumstances, autonomous systems can foster a strong foundation of trust with human users.

Incorporating these insights into the design process can help create autonomous systems that are not only efficient and effective but also capable of establishing and sustaining trust in human-robot collaborations over extended periods.

What are the potential limitations of using self-reported trust as a measure, and how could behavioral measures of trust be incorporated to provide a more comprehensive understanding?

Self-reported trust, while valuable, has certain limitations as a measure of trust in human-robot interactions:

  1. Subjectivity: Self-reported trust is inherently subjective and may be influenced by individual biases, perceptions, and experiences. Different individuals may interpret and respond to trust-related questions differently, leading to variability in the data.

  2. Social desirability bias: Participants may provide responses that they believe are socially acceptable or expected, rather than reflecting their true feelings of trust. This can lead to inaccuracies in the reported trust levels.

  3. Memory recall: Participants may have difficulty accurately recalling and evaluating their trust experiences over time, especially in complex and dynamic interactions with autonomous systems.

To complement self-reported measures, incorporating behavioral measures of trust can provide a more comprehensive understanding of human-robot trust dynamics. Behavioral measures involve observing and analyzing actual interactions, responses, and performance metrics during collaborative tasks. Some ways to incorporate behavioral measures include:

  1. Observational studies: Conduct observational studies to track human behaviors, reactions, and decision-making processes during interactions with autonomous systems. This can provide valuable insights into trust-related behaviors in real time.

  2. Performance metrics: Analyze performance metrics, such as task completion time, accuracy, and efficiency, to assess the impact of trust on task outcomes. Changes in performance indicators can reflect shifts in trust levels.

  3. Physiological responses: Monitor physiological responses, such as heart rate variability or skin conductance, to capture emotional and physiological reactions associated with trust. These measures can offer objective insights into trust dynamics.

By combining self-reported trust measures with behavioral measures, researchers and designers can gain a more holistic understanding of human-robot trust, capturing both subjective perceptions and objective behaviors to inform the development of more effective autonomous systems.
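
As a concrete illustration of pairing the two kinds of measures, the sketch below correlates per-trial self-reported trust with two hypothetical behavioral proxies (whether the subject relied on the agent's report, and task completion time). The column names and values are invented for illustration and are not data from this study.

```python
# Sketch: relating self-reported trust to hypothetical behavioral proxies.
import pandas as pd
from scipy.stats import spearmanr

# Invented per-trial records; "reliance" marks whether the subject accepted
# the agent's report without re-searching the grid themselves.
df = pd.DataFrame({
    "self_reported_trust": [40, 55, 60, 72, 68, 80],   # survey rating per trial
    "reliance": [0, 1, 1, 1, 0, 1],                     # 1 = relied on the agent
    "completion_time_s": [34.2, 28.9, 27.5, 22.1, 30.4, 21.8],
})

# Rank correlation between the subjective rating and each behavioral measure.
for col in ["reliance", "completion_time_s"]:
    rho, p = spearmanr(df["self_reported_trust"], df[col])
    print(f"{col}: rho={rho:.2f}, p={p:.3f}")
```

A consistent relationship between the subjective ratings and such behavioral proxies would strengthen the case that the self-reports track actual reliance on the agent.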