Core Concepts
Automated warning labels affect post engagement differently based on the reasons provided, with partisanship influencing user behavior.
Summary
The study explores the impact of automated misinformation warning labels on post engagement intents, focusing on likes, comments, and shares. It investigates how different types of labels, derived from the algorithmic misinformation detection literature, affect engagement patterns, and it considers the influence of political congruence on engagement behaviors. Findings suggest that the presence of warning labels suppresses intents to comment and share but not to like posts. Different reasons given for the labels have varying effects on engagement, with partisanship playing a significant role in shaping user behavior.
Abstract:
- Investigates effects of automated warning labels on post engagement.
- Considers partisanship influence on engagement intents.
Introduction:
- Misinformation prevalence in social media platforms.
- Countermeasures such as fact-checking by professionals and automated labeling.
Related Work:
- Studies on misinformation labeling interventions and social media engagement.
Method:
- Two-phase within-subjects experiment with 200 participants.
- Analyzed effects of automated warning labels and reasons provided for them.
Results:
Phase I:
- Generic label suppressed intents to comment on and share posts, but not to like them.
Phase II:
- Different label types influenced intent to share posts differently.
- Partisanship effects: higher intents to comment on and share politically congruent posts.
Discussion:
- Partisanship influences post engagement behaviors significantly.
- Label descriptions impact user behavior even when they stem from a common fact-checking algorithm.
Conclusion:
- Automated warning labels have varied effects on post engagement intents.
- Considerations for designing effective automated misinformation interventions are highlighted.
Statistics
With fact-checking by professionals being difficult to scale on social media, algorithmic techniques have been considered. However, it is uncertain how the public may react to labels by automated fact-checkers. In this study, we investigate the use of automated warning labels derived from the misinformation detection literature and examine their effects on three forms of post engagement. Focusing on political posts, we also consider how partisanship affects engagement. In a two-phase within-subjects experiment with 200 participants, we found that the generic warnings suppressed intents to comment on and share posts, but not the intent to like them. Furthermore, when different reasons for the labels were provided, their effects on post engagement were inconsistent, suggesting that the reasons could have undesirably motivated engagement instead. Partisanship effects were observed across the labels, with higher engagement for politically congruent posts.
In a study comparing warnings from different sources [32], the machine-learning warning that disputed the post was found to perform worse than the human fact-checking warning. However, when the machine-learning warning was extended to include a graph displaying the factors contributing to the algorithm's decision, participants discerned between fake and true news most accurately. Another study had corresponding observations, where both algorithmic and third-party fact-checker labels were found to reduce participants' perceived accuracy and believability of fake posts irrespective of the post's political ideology [17].
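To illustrate the attribution idea described above, here is a minimal sketch, assuming a hypothetical TF-IDF plus logistic regression classifier, of how an automated fact-checker could surface the terms behind a "disputed" decision. This is not the system evaluated in [32] or in our study; the example posts, labels, and the `explain` helper are invented for illustration.

```python
# Minimal sketch: a toy misinformation classifier that can report which terms
# contributed most to flagging a post (hypothetical data and thresholds).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled posts: 1 = misinformation, 0 = credible.
posts = [
    "Miracle cure hidden by doctors, share before it is deleted!",
    "Official report released by the health ministry today.",
    "Secret memo proves the election was rigged, spread the truth!",
    "City council approves new budget after public hearing.",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

def explain(post, top_k=3):
    """Return the terms that contributed most to the 'misinformation' score."""
    x = vectorizer.transform([post])
    # Contribution of each term = its tf-idf weight * the learned coefficient.
    contributions = x.toarray()[0] * clf.coef_[0]
    terms = vectorizer.get_feature_names_out()
    top = sorted(zip(terms, contributions), key=lambda t: t[1], reverse=True)
    return [(term, round(float(score), 3)) for term, score in top[:top_k] if score > 0]

print(explain("Doctors hide this miracle cure, share now!"))
```

A display like the graph in [32] could then chart these per-term contributions alongside the warning label, giving users a basic sense of why the algorithm disputed the post.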
Our study differs from and extends these studies in two ways. First, we investigate a different set of warning labels that are derived from the main categories of algorithmic misinformation detection in the literature. Second, we investigate effects on three common forms of post engagement, including intents to comment on and like posts apart from just sharing them. In doing so, our study offers a closer look at how engagement patterns vary across different forms of engagement and how the reasons given for labels can affect them differently.
As political misinformation has been a concern in recent years [16], we look at political posts and thereby also consider partisanship effects [1] on post engagement. Our work supports literature that automated misinformation labeling is a viable measure for scaling fact-checking, but careful consideration must be put into both the design of the label and ensuring that users have a basic understanding of the underlying algorithm, so that they do not misinterpret or get misled by the warning labels. In this paper, we report the findings of our study, which involves a two-phase within-subjects experiment with 200 participants. We look at the effects of automated misinformation warning labels on the intents to like, comment on, and share posts, and also consider how these effects changed when different reasons were provided for the labels. With partisanship being a key factor in engaging with political posts, we also consider how the effects of the labels were affected by the political congruence of the posts to the participants. In the remainder of the paper, we discuss the literature grounding our work, the results of the two experiment phases, and their implications, contributing to literature informing the use of automated fact-checking for labeling content on social media.
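To make concrete how different reason descriptions can sit on top of a common fact-checking verdict (the manipulation examined in Phase II), the following is a minimal sketch in Python. The reason categories and label strings are hypothetical stand-ins loosely inspired by content-, source-, and propagation-based detection in the literature, not the labels used in the study.

```python
# Minimal sketch: the same "flagged" verdict paired with different reason texts.
from dataclasses import dataclass
from typing import Optional

# Hypothetical reason categories and descriptions (illustrative only).
REASON_TEXT = {
    "generic": "This post may contain false information.",
    "content": "This post may contain false information: its claims resemble previously debunked content.",
    "source": "This post may contain false information: it cites sources with a history of publishing falsehoods.",
    "propagation": "This post may contain false information: it is spreading in a pattern typical of coordinated campaigns.",
}

@dataclass
class WarningLabel:
    post_id: str
    reason: str
    text: str

def make_label(post_id: str, flagged: bool, reason: str = "generic") -> Optional[WarningLabel]:
    """Pair one fact-checking verdict ('flagged') with a chosen reason description."""
    if not flagged:
        return None
    return WarningLabel(post_id, reason, REASON_TEXT[reason])

# The verdict is identical; only the stated reason shown to the user changes.
for reason in REASON_TEXT:
    print(make_label("post_42", flagged=True, reason=reason).text)
```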
Quotes
"Many studies have looked into effectiveness of these labels often focusing how they affect perceived accuracy posts intent share them While countermeasure has been found generally effective it is difficult scale other methods such crowdsourced algorithmic factchecking have been proposed" - Gionnieve Lim
"Our work supports literature that automated misinformation labeling is viable measure scaling fact checking but careful consideration must be put both design label well ensuring users basic understanding underlyingalgorithm so misinterpret misled bywarninglabels" - Simon T Perrault
"In this paper report findings our study involves twophaseswithinsubjectsexperimentwith200participants We look at effectsofautomatedmisinformationwarninglabelsonintentstolikecommentonandsharepostsandalsoconsiderhowtheseeffectschangedwhendifferentreasonswereprovidedforthelabels Withpartisanshiabeingakeyfactorofengagingwithpoliticalpostswealsoconsiderhowtheeffectsofthelabelswereaffectedbythepoliticalcongruenceofthepoststotheparticipants" - Gionnieve Lim