
Framing Uncertainty in AI Decision Aids: Longitudinal Impact on User Trust - A Gig Driver Case Study


Core Concepts
The framing of uncertainty in the outcomes of AI-based decision aids can significantly impact users' longitudinal trust and willingness to rely on the system.
Abstract
The study investigates how the framing of uncertainty in outcomes affects users' longitudinal trust in, and reliance on, an AI-based schedule recommendation tool for gig drivers. Key findings:
- Users who perceived the outcomes as meeting or exceeding the tool's estimates had greater performance-based trust than those who perceived the outcomes as falling below estimates.
- Among users with the same level of trust prior to interaction, reliance was higher for those who perceived the outcomes as meeting or exceeding estimates.
- Designs that presented estimates with greater granularity, but without error visibility, increased users' trust and reliance.
- Designs that showed ranged estimates decreased users' trust and reliance by increasing error visibility.
Interviews revealed diverse user experiences, suggesting that AI systems must go beyond one-size-fits-all designs to calibrate the expectations of individual users. The study contributes an in situ case study on the impact of uncertainty framing on longitudinal human-AI trust, with implications for the design of AI decision aids.
Stats
On average, a driver following the recommended schedule will earn $X. On a bad week, a driver following the recommended schedule will earn $Y. On a good week, a driver following the recommended schedule will earn $Z.
Quotes
"On average, based on historical data, a driver following this schedule will earn..." "It is estimated that you will earn..."

Key Insights Distilled From

by Rex Chen, Rui... : arxiv.org 04-10-2024

https://arxiv.org/pdf/2404.06432.pdf

Deeper Inquiries

How might the tool's design be further improved to better align with the diverse needs and experiences of gig drivers?

To better align the tool's design with the diverse needs and experiences of gig drivers, several improvements can be considered:
- Customization options: Provide more flexibility for drivers to input their constraints and preferences, such as setting specific earning goals, limiting the scope of historical data used for estimates, and choosing which platforms to include.
- Real-time data integration: Incorporate real-time data on demand and supply to provide more accurate and dynamic estimates, helping drivers make informed decisions based on current conditions.
- Feedback mechanism: Implement a feature that lets drivers compare the tool's estimates with their actual earnings (a minimal sketch follows this list). This feedback loop can help drivers understand the tool's accuracy and build trust over time.
- Mobile-friendly interface: Optimize the tool for mobile devices, since many gig drivers work from their smartphones; a usable mobile interface improves accessibility.
- Gamification elements: Introduce challenges, rewards, or leaderboards based on performance to make the tool more engaging and motivating for drivers.
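
As an illustration of the feedback-mechanism idea in the list above, the following is a minimal sketch that compares the tool's weekly estimate with a driver's actual earnings and phrases the result for the driver. The WeeklyOutcome dataclass, the tolerance threshold, and the message wording are hypothetical; they are not features described by the study.

```python
# Sketch of an estimate-vs-actual feedback loop for a schedule tool.
# The dataclass, tolerance threshold, and message wording are hypothetical.

from dataclasses import dataclass

@dataclass
class WeeklyOutcome:
    estimated_earnings: float  # what the tool predicted for the week
    actual_earnings: float     # what the driver actually earned

def feedback_message(outcome: WeeklyOutcome, tolerance: float = 0.05) -> str:
    """Summarize how actual earnings compared with the estimate.

    `tolerance` is the relative band within which the estimate counts as met.
    """
    error = (outcome.actual_earnings - outcome.estimated_earnings) / outcome.estimated_earnings
    if error >= tolerance:
        return f"You earned {error:.0%} more than estimated this week."
    if error <= -tolerance:
        return f"You earned {-error:.0%} less than estimated this week."
    return "Your earnings matched the estimate this week."

# Example: an estimate of $900 against actual earnings of $780.
print(feedback_message(WeeklyOutcome(estimated_earnings=900, actual_earnings=780)))
```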

What are the potential downsides of increasing the granularity and transparency of uncertainty in AI decision aids?

While increasing the granularity and transparency of uncertainty in AI decision aids has several benefits, there are also potential downsides:
- Information overload: Too much detail about uncertainty may overwhelm users, leading to decision paralysis or confusion; users may struggle to interpret complex data and make informed choices.
- Misinterpretation: Greater transparency can lead users to fixate on specific details without considering the broader context, producing biased decisions.
- Loss of trust: Exposing too much uncertainty may erode trust in the AI system; if users perceive the system as unreliable or overly complex, they may be less likely to rely on its recommendations.
- Increased cognitive load: Interpreting highly granular and transparent uncertainty requires additional cognitive effort, which can be taxing for users with limited time or cognitive resources.
- Privacy concerns: Detailed transparency may reveal sensitive information about the AI's algorithms or data sources; protecting user data while providing transparency is a delicate balance.

How could the insights from this study on uncertainty framing be applied to improve trust in AI systems beyond the gig driving context?

The insights from this study on uncertainty framing can be applied to improve trust in AI systems across various domains:
- Healthcare: AI systems that transparently communicate uncertainty in diagnoses or treatment recommendations, with clear explanations of confidence levels, can enhance patient trust and outcomes.
- Finance: AI decision aids can build trust by explaining risk assessments and investment recommendations in detail; transparently presenting uncertainty levels empowers users to make informed financial decisions.
- Education: Openly communicating the uncertainty in personalized learning recommendations helps students and educators understand the confidence levels and limitations of AI-driven suggestions.
- Customer service: AI-powered customer service platforms can enhance trust by disclosing the uncertainty in automated responses or recommendations; clear communication about the decision-making process increases user confidence.
- Legal and compliance: AI systems used for legal research or compliance tasks can build trust by providing granular detail on the uncertainty in case outcomes or regulatory interpretations, helping legal professionals make well-informed decisions.