Hate speakers on social media are exposed to a higher proportion of news from low-credibility sources compared to non-hate speakers, particularly for hate speech targeting Jews and Muslims. This association is driven by exposure to unpopular low-credibility posts from partisan sources aligned with the target population.
People unfollow social media accounts due to a variety of reasons, including ignorant or tacky content, lack of authenticity, excessive self-promotion, and shifts in the account's perspective or focus.
Toxic comments in online conversations can escalate and perpetuate toxicity, leading to more negativity and hostility on social media platforms.
Neural models can learn hidden representations of individual rumor-related tweets from the very beginning of a rumor, improving classification performance over time, with significant gains within the first 10 hours.
An approach leverages Convolutional Neural Networks to learn the credibility of individual tweets and aggregates these predictions into an overall event credibility score, which is then combined with a time-series-based rumor classification model to improve early-stage rumor detection.
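The aggregation-and-fusion step can be illustrated with a minimal sketch. The per-tweet scores are mocked here in place of real CNN outputs, and the function names, the mean aggregation, and the linear fusion weight are illustrative assumptions rather than the paper's actual design.

```python
def aggregate_event_credibility(tweet_scores):
    """Average per-tweet credibility predictions into one event-level score."""
    return sum(tweet_scores) / len(tweet_scores)

def fuse_with_time_series(event_score, ts_score, weight=0.5):
    """Linearly combine the CNN-based event score with a time-series-based
    rumor score; the equal weighting is an assumed parameter."""
    return weight * event_score + (1 - weight) * ts_score

# Mocked per-tweet credibility predictions for one event.
scores = [0.9, 0.7, 0.8, 0.6]
event = aggregate_event_credibility(scores)    # 0.75
combined = fuse_with_time_series(event, 0.55)  # 0.65
```

In practice the fusion would be learned rather than fixed, but the sketch shows how tweet-level and time-series signals can be reconciled into a single early-stage prediction.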
Twitter's friend recommendation algorithm results in less politically homogeneous personal networks compared to social endorsement-based network growth, but still structurally resembles echo chambers. Accounts using the recommendation system also have lower potential exposure to false and misleading election-related content.
HyperGraphDis is a novel hypergraph-based method that effectively captures the intricate social structures, user relationships, and semantic/topical nuances to accurately and efficiently detect disinformation on social media platforms like Twitter.
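The core data structure can be sketched as follows. The grouping key (a shared retweet cascade) and the field names are illustrative assumptions about how a hypergraph over tweets might be built, not HyperGraphDis's actual construction.

```python
from collections import defaultdict

def build_hypergraph(tweets, key="cascade_id"):
    """Group tweet ids into hyperedges by a shared context attribute,
    so each hyperedge connects all tweets in the same social context."""
    edges = defaultdict(set)
    for t in tweets:
        edges[t[key]].add(t["id"])
    return dict(edges)

# Toy example: two retweet cascades become two hyperedges.
tweets = [
    {"id": 1, "cascade_id": "c1"},
    {"id": 2, "cascade_id": "c1"},
    {"id": 3, "cascade_id": "c2"},
]
hg = build_hypergraph(tweets)  # {"c1": {1, 2}, "c2": {3}}
```

Unlike an ordinary graph edge, a hyperedge can join any number of nodes at once, which is what lets the representation capture group-level structures such as cascades or shared topics.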
Self-presentation afforded by Twitter user profiles can shape perceptions of alignments between non-political and political identifiers, contributing to subjective social sorting.
An ensemble method is proposed for detecting social media bots across multiple platforms, including Twitter, Reddit, and Instagram, by training specialized classifiers for different user data fields and aggregating their outputs.
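The ensemble idea can be sketched with one specialized classifier per user data field, combined by majority vote. The field names and decision rules below are illustrative assumptions, not the paper's actual features or trained models.

```python
def username_clf(user):
    # Flag usernames ending in a long digit run (a common bot pattern).
    return user["username"][-4:].isdigit()

def bio_clf(user):
    # Flag empty bios as weakly bot-like.
    return user["bio"].strip() == ""

def activity_clf(user):
    # Flag implausibly high posting rates (posts per day).
    return user["posts_per_day"] > 100

def ensemble_predict(user, classifiers=(username_clf, bio_clf, activity_clf)):
    """Majority vote over the field-specific classifiers."""
    votes = sum(clf(user) for clf in classifiers)
    return votes >= (len(classifiers) + 1) // 2

suspect = {"username": "news4821", "bio": "", "posts_per_day": 250}
human = {"username": "alice", "bio": "Runner, reader.", "posts_per_day": 3}
```

Because each classifier only needs the fields a platform actually exposes, the same ensemble can be applied across Twitter, Reddit, and Instagram by dropping or swapping field-specific members.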
This article introduces a "Calibrate-Extrapolate" framework that uses a pre-trained black-box classifier to efficiently estimate the prevalence of toxic comments on social media platforms.
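A minimal sketch of the calibrate-then-extrapolate pattern: the black-box classifier's raw scores are calibrated against a small labeled sample (here via a crude two-bin split at 0.5), and the calibrated probabilities are then averaged over the full unlabeled corpus. The binning scheme and the toy data are illustrative assumptions, not the framework's actual procedure.

```python
def calibrate(labeled):
    """Map score bins (low/high, split at 0.5) to empirical toxic rates
    measured on a small human-labeled sample."""
    bins = {False: [], True: []}
    for score, is_toxic in labeled:
        bins[score >= 0.5].append(is_toxic)
    return {b: sum(v) / len(v) for b, v in bins.items() if v}

def estimate_prevalence(scores, calibration):
    """Average the calibrated probability over all unlabeled scores."""
    return sum(calibration[s >= 0.5] for s in scores) / len(scores)

# Small labeled sample: (classifier score, human toxicity label).
labeled = [(0.9, True), (0.8, True), (0.7, False),
           (0.2, False), (0.1, False), (0.4, True)]
cal = calibrate(labeled)  # high-score bin -> 2/3, low-score bin -> 1/3
corpus_scores = [0.9, 0.1, 0.3, 0.8, 0.6, 0.2]
prevalence = estimate_prevalence(corpus_scores, cal)  # 0.5
```

The point of the pattern is that only the small calibration sample needs human labels; the (possibly miscalibrated) classifier is then extrapolated cheaply over the whole corpus.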