Core Concepts
By modeling human trust and engagement dynamics, an optimal assistance-seeking policy for robots can be developed to improve overall team performance in collaborative tasks.
Summary
Bibliographic Information:
Mangalindan, D. H., & Srivastava, V. (2024). Assistance-Seeking in Human-Supervised Autonomy: Role of Trust and Secondary Task Engagement (Extended Version). arXiv preprint arXiv:2405.20118v3.
Research Objective:
This research investigates how a robot's assistance-seeking behavior affects human trust and performance in a dual-task scenario, aiming to design an optimal assistance-seeking policy that maximizes team performance.
Methodology:
The researchers conducted human-subject experiments using a dual-task paradigm where participants supervised a robot collecting objects while simultaneously performing a target-tracking task. They collected data on human trust ratings, task performance, and robot actions. Using this data, they developed and estimated models for human trust dynamics, target-tracking engagement dynamics, and human action selection probability. Finally, they designed an optimal assistance-seeking policy using Model Predictive Control (MPC) based on the estimated models.
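The paper reports estimated stochastic models rather than code, but the following Python sketch illustrates the general shape of such an MPC loop: roll the trust and engagement models forward over a short horizon for each candidate action sequence, then execute the first action of the best sequence. The two actions, the update rules, the interruption penalty, and all coefficients other than the reported autonomous success probabilities (0.96 in low-complexity tasks, 0.75 in high-complexity tasks) are assumptions made purely for illustration, not the authors' estimated models.

```python
import itertools

# All model forms and coefficients below are illustrative assumptions,
# not the dynamics estimated in the paper.

def trust_update(trust, action, success):
    """Trust rises with autonomous success and falls with failure;
    a query to the human has a small positive effect."""
    if action == "autonomous":
        delta = 0.10 if success else -0.25
    else:  # "ask_human"
        delta = 0.02
    return min(1.0, max(0.0, trust + delta))

def engagement_update(engagement, action):
    """Handling a query pulls the human away from target tracking."""
    delta = -0.15 if action == "ask_human" else 0.05
    return min(1.0, max(0.0, engagement + delta))

def expected_stage_reward(trust, engagement, action, complexity):
    """One-step expected team reward: robot success plus tracking score,
    minus the risk that a low-trust supervisor interrupts the robot."""
    if action == "ask_human":
        return 0.99 + (engagement - 0.10)   # assisted success, distracted human
    p_success = 0.96 - 0.21 * complexity    # matches the reported 0.96 / 0.75
    return p_success + engagement - 0.5 * (1.0 - trust)

def mpc_policy(trust, engagement, complexities, horizon=3):
    """Enumerate action sequences over the horizon, roll the models
    forward, and return the first action of the best sequence."""
    best_value, best_first = float("-inf"), "autonomous"
    for seq in itertools.product(["autonomous", "ask_human"], repeat=horizon):
        t, e, value = trust, engagement, 0.0
        for action, c in zip(seq, complexities[:horizon]):
            value += expected_stage_reward(t, e, action, c)
            # Propagate trust with its success-weighted expected update.
            p = 0.96 - 0.21 * c if action == "autonomous" else 1.0
            t = p * trust_update(t, action, True) + (1 - p) * trust_update(t, action, False)
            e = engagement_update(e, action)
        if value > best_value:
            best_value, best_first = value, seq[0]
    return best_first

# Example: low trust heading into a high-complexity task.
print(mpc_policy(trust=0.4, engagement=0.8, complexities=[1, 0, 0]))
```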
Key Findings:
- Human trust in the robot is influenced by the robot's performance, actions, and the complexity of the task.
- Human engagement in the secondary task is affected by the robot's actions and by the human's trust in the robot.
- The optimal assistance-seeking policy for the robot is context-dependent, varying with both human trust and engagement levels.
- The MPC-based policy, which accounts for human trust and engagement, outperforms a greedy baseline policy that considers only task complexity (the two decision rules are contrasted in the sketch below).
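For contrast with the MPC sketch above, the complexity-only greedy baseline reduces to a one-line rule in the same hypothetical setting; the threshold is an assumed parameter, not a value from the paper.

```python
def greedy_policy(complexity, ask_threshold=1):
    """Baseline: ignores trust and engagement entirely; asks for
    help whenever the current task is complex enough."""
    return "ask_human" if complexity >= ask_threshold else "autonomous"
```

Unlike the MPC sketch, this rule never weighs a query's immediate benefit against its predicted cost to trust and secondary-task engagement on future trials, which is one plausible reading of why it performed worse in the experiments.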
Main Conclusions:
Modeling human trust and secondary-task engagement dynamics makes it possible to design an optimal assistance-seeking policy that improves overall team performance in collaborative tasks. Such a policy should adapt to task complexity and the human's state, seeking assistance when human trust is low or when engagement in the secondary task is compromised.
Significance:
This research contributes to the field of human-robot collaboration by providing insights into the factors influencing human trust and engagement during collaborative tasks. The proposed MPC-based assistance-seeking policy offers a practical approach to improve the efficiency and effectiveness of human-robot teams.
Limitations and Future Research:
The study was limited to a specific dual-task scenario. Future research could explore the generalizability of the findings and the policy to other collaborative tasks and environments. Additionally, investigating the impact of different robot communication strategies on human trust and engagement could further enhance the design of assistance-seeking policies.
Statistics
When the robot operated autonomously, it had a success probability of 0.75 in high-complexity tasks and 0.96 in low-complexity tasks.
In high-complexity tasks, the robot asked for human assistance with a probability of 0.3, while in low-complexity tasks, the probability was 0.1.
For the target-tracking task, participants achieved a mean performance of 89% at slow speeds and 82% at normal speeds.
The MPC policy resulted in 16 interruptions by the human participants, while the greedy policy led to 23 interruptions.
The median cumulative reward scores were 65.75 for the MPC policy and 57 for the greedy policy.
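As a rough illustration of why the optimal policy is context-dependent, the toy computation below combines the reported autonomous success rates with two assumed quantities (a 0.99 success probability when the human assists, and a 0.10 tracking-performance cost per query); the equal weighting of the two tasks is likewise an assumption, not the paper's reward function.

```python
# Reported estimates (from the statistics above).
P_SUCCESS_AUTO = {"high": 0.75, "low": 0.96}
TRACKING_NORMAL = 0.82

# Assumptions for illustration, not values from the paper.
P_SUCCESS_ASSISTED = 0.99
QUERY_TRACKING_COST = 0.10

def trial_score(complexity, ask):
    """Expected team score: robot success plus tracking performance,
    with equal (assumed) weights on the two tasks."""
    robot = P_SUCCESS_ASSISTED if ask else P_SUCCESS_AUTO[complexity]
    tracking = TRACKING_NORMAL - (QUERY_TRACKING_COST if ask else 0.0)
    return robot + tracking

for c in ("high", "low"):
    gain = trial_score(c, ask=True) - trial_score(c, ask=False)
    print(f"{c}-complexity: net gain from asking = {gain:+.2f}")
```

Under these assumptions, asking pays off only on high-complexity trials (+0.14 versus -0.07 on low-complexity ones), consistent with the context-dependent behavior of the estimated policy.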
Quotes
"Autonomous systems are often underutilized due to the lack of trust, defeating the purpose and benefits of using automation."
"In contrast, an excessive reliance or trust in automation can lead to misuse or abuse of the system."
"Supervisors often juggle multiple tasks, managing their own responsibilities while overseeing others. This dynamic also applies to human supervisors overseeing autonomous agents."
"The human should only intervene with the robot when absolutely necessary to prevent compromising their own fruit collection. Similarly, the robot should be designed to operate with minimal interference to the human’s tasks."