Core Concepts
Calibrating human self-confidence enhances human-AI team performance and encourages rational reliance on AI.
Abstract
The study examines the relationship between the appropriateness of human self-confidence and the appropriateness of reliance in AI-assisted decision-making. Three self-confidence calibration mechanisms are proposed: Think the Opposite, Thinking in Bets, and Calibration Status Feedback. Results show that mismatches between human self-confidence and actual correctness lead to more incorrect reliance, and that calibrating self-confidence can reduce it. Showing AI confidence did not significantly affect task performance or reliance appropriateness.
Stats
ECE (Expected Calibration Error) positively correlates with under-reliance (ρ = 0.404, p < 0.001) and over-reliance (ρ = 0.343, p < 0.01).
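For reference, below is a minimal Python sketch of how these two quantities could be computed: the standard equal-width-binning definition of ECE, ECE = Σ_m (|B_m|/n)·|acc(B_m) − conf(B_m)|, applied to human self-confidence reports, and a Spearman rank correlation against a reliance measure. The bin count, function names, and sample data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE = sum_m (|B_m| / n) * |acc(B_m) - conf(B_m)| over equal-width bins."""
    conf = np.asarray(conf, dtype=float)      # self-confidence reports in [0, 1]
    correct = np.asarray(correct, dtype=float)  # 1 if the decision was correct, else 0
    # Map each confidence to one of n_bins equal-width bins on [0, 1].
    inner_edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    bin_ids = np.digitize(conf, inner_edges)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            # Weight each bin's |accuracy - mean confidence| gap by its share of samples.
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Hypothetical per-participant data, only to illustrate the correlation test;
# the study's actual data and effect sizes are not reproduced here.
rng = np.random.default_rng(0)
ece_scores = rng.uniform(0.05, 0.40, size=50)                  # one ECE per participant
under_reliance = 0.8 * ece_scores + rng.normal(0.0, 0.05, 50)  # noisy positive association
rho, p = spearmanr(ece_scores, under_reliance)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```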