Key Concepts
The author examines why people trust AI systems, identifying four rationales that shed light on human trust. These rationales are relevant for developers and scholars designing trust measures for technological systems.
Summary
The article examines the motivations behind trusting AI systems, highlighting four rationales: human favoritism, the black box rationale, the OPSEC rationale, and the 'wicked world, tame computers' rationale. These insights are central to understanding human attitudes towards technology.
Companies worldwide are eager to adopt AI technology despite warnings about bias and risk. A survey of more than 450 respondents from over 30 countries revealed four main rationales for trusting or distrusting AI: human favoritism, concerns about black box algorithms, operational security (OPSEC), and the recognition that some problems are too complex for algorithmic solutions. The study stresses that developers and designers of AI systems need to understand these motivations.
AI promises automation and efficiency but raises concerns about bias and privacy. The study analyzes responses from a diverse group to understand why people trust or distrust AI systems, and the same four rationales emerge: human favoritism, skepticism towards black box algorithms, concerns about operational security (OPSEC), and recognition of the limits of algorithmic solutions to complex real-world problems.
Statistics
"More than 450 respondents from more than 30 different countries"
"Almost 3000 open text answers"
"Survey conducted during November and December 2023"
Quotes
"I wouldn’t trust anything driverless period."
"Lower stakes - only deals with hobbies/past times as opposed to finances"
"The first is pure totalitarian shit: easily converted into Big Brother."