
Dehumanization Through Bot Accusations: Tracking the Shift in Twitter User Perceptions

Core Concepts
The term "bot" has shifted from describing accounts that exhibit explicit signs of automation to serving as a political insult used to dehumanize and discredit conversation partners, especially in controversial, polarizing debates.
The study traces how bot accusations on Twitter have evolved over time. In the early years (before 2017), users were accused of being bots mainly when they showed explicit signs of automation, such as spamming repetitive content or hitting Twitter's follow and rate limits. Since 2017, however, the meaning of the accusation has shifted markedly: the authors find that it is now used predominantly as a dehumanizing insult that questions the accused user's intelligence and their right to voice an opinion, typically in polarizing political debates around topics such as elections, COVID-19, or Brexit.

The study also reveals a discrepancy between the bot definitions internalized by Twitter users and the way bots are operationalized in academic bot-detection tools such as Botometer. While accounts accused of being bots had high Botometer scores in the early years, that correlation disappears in later years as the accusation became a political insult rather than a reflection of actual automation.

These findings carry important implications for bot-detection research: bot accusations on social media should not be naively used as a signal or as ground-truth data for such methods. The study also highlights the need for future research on the impact of dehumanizing bot accusations on individuals and on how they can be effectively countered.
Example tweets that drew early, automation-related bot accusations:
- "follow me i follow you"
- "good morning! you deserve a fantastic day today!"
- "i'm a human i'm a human i'm a human"
- "oh dear, twitter says i'm not allowed to follow anymore people. what's that all about then?"

Polarizing topics surrounding later accusations:
- covid, vaccine, pandemic, masks, virus
- election, fraud, stolen, insurrection
- trump, biden, president
- scotland, independence, boris, pm

Terms co-occurring with "bot" in accusations (earlier vs. later period):
- troll, idiot, person, probably, moron
- definitely, troll, idiot, foreigner, fool, human, account, probably, shill, person, parody, moron, paid, robot, joke, tool, real, chinese, russian
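Identifying second-person bot accusations like the examples above could be approximated with a simple keyword filter. The sketch below is a hypothetical illustration, not the authors' actual selection method; the regular-expression patterns are assumptions chosen for demonstration.

```python
import re

# Illustrative patterns for direct bot accusations; the study's real
# criteria are not reproduced here.
ACCUSATION_PATTERNS = [
    r"\byou('?re| are) (probably |definitely |clearly )?a bot\b",
    r"\bsuch a bot\b",
    r"\bbot account\b",
]

def is_bot_accusation(text: str) -> bool:
    """Return True if the tweet text matches any accusation pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in ACCUSATION_PATTERNS)

print(is_bot_accusation("You are definitely a bot."))    # True
print(is_bot_accusation("I built a bot for my server"))  # False
```

A filter like this would flag accusations regardless of whether the target shows any actual automation, which is precisely the gap the study documents between user accusations and detector scores.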

Deeper Inquiries

How do bot accusations impact the mental well-being and online experiences of the accused users?

The impact of bot accusations on the mental well-being and online experiences of the accused users can be significant. Being labeled as a bot can lead to feelings of dehumanization, as the term is often used as an insult to question the intelligence or authenticity of the accused user. This can result in emotional distress, feelings of alienation, and a sense of being unfairly targeted. Accused users may experience a loss of credibility and trust within their online communities, leading to social isolation and a negative impact on their self-esteem. Moreover, the toxic and polarized environment that often accompanies bot accusations can further exacerbate the negative effects on the mental well-being of the accused users, creating a hostile online experience.

How can the dehumanizing use of the term "bot" be effectively countered on social media platforms?

Several strategies can help counter the dehumanizing use of the term "bot" on social media platforms:
- Education and awareness: Increasing awareness about the impact of dehumanizing language and the consequences of false accusations can help users understand the importance of respectful communication.
- Community guidelines: Implementing and enforcing clear community guidelines that prohibit dehumanizing language and insults can create a more positive and inclusive online environment.
- Moderation and reporting: Encouraging users to report instances of dehumanizing language and providing effective moderation tools can help identify and address such behavior promptly.
- Empathy and understanding: Promoting empathy and understanding among users can foster a culture of respect and tolerance, reducing the likelihood of dehumanizing language being used.
- Positive reinforcement: Recognizing and rewarding constructive communication can encourage users to engage in respectful dialogue and discourage the use of dehumanizing terms.

What are the broader societal implications of the transformation of bot accusations from a technical term to a political insult?

The transformation of bot accusations from a technical term into a political insult has several broader societal implications:
- Polarization: Using bot accusations as a political insult can deepen polarization and division within society, as it reinforces an "us vs. them" mentality.
- Misinformation: The dehumanizing use of the term "bot" can further perpetuate misinformation and disinformation, as it distracts from substantive discussion and debate.
- Trust and credibility: The widespread use of bot accusations as a political tool can erode trust and credibility in online discourse, making it harder to distinguish genuine accounts from automated ones.
- Freedom of expression: Dehumanizing individuals through bot accusations can have a chilling effect on freedom of expression, as users may self-censor to avoid being targeted or labeled unfairly.
- Psychological impact: Normalizing dehumanizing language in online interactions can harm individuals psychologically, contributing to a toxic online environment and potentially influencing offline behavior.
These implications highlight the importance of addressing the misuse of bot accusations and promoting respectful, constructive dialogue in online spaces.