
Subjective Social Sorting on Twitter: How Self-Presentation Shapes Perceptions of Political Alignment


Key Concept
Self-presentation afforded by Twitter user profiles can shape perceptions of alignments between non-political and political identifiers, contributing to subjective social sorting.
Abstract

The study examines how self-presentation on Twitter user profiles may contribute to subjective social sorting along political lines. Key findings:

  • There was a substantial increase in the number of Twitter users publicly defining themselves using anti-establishment right identities (e.g. "MAGA", "Trump") between 2016 and 2018, accompanied by more modest growth in left and pro-establishment right identities.

  • Approximately 9.2% of non-political identifiers (social identities, preferences, affiliations) significantly aligned with political identities, reinforcing existing associations, revealing unexpected relationships, and reflecting online/offline events.

  • Certain types of identifiers, like religion, activism, and family, exhibited strong bias towards one political side or the other, while others like sports and technology were more bridging.

  • Temporal changes in alignments reflected broader shifts in party identification, as holders of anti-establishment orientations joined and rose to prominence within the Republican party.

  • The relatively small set of Twitter users with overt political signals has disproportionate influence due to their high activity and large follower counts compared to non-political users.
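The alignment statistics reported in the Statistics section (LR and EST scores) read like standardized measures of how strongly an identifier is associated with one political side. As a rough illustration of how such a score can be computed from profile counts, here is a minimal smoothed log-odds z-score sketch. The estimator, the smoothing constant, and the toy counts are all assumptions for illustration, not the study's actual method:

```python
import math

def log_odds_z(count_a, total_a, count_b, total_b, alpha=0.5):
    """Smoothed log-odds ratio of an identifier between two groups,
    divided by its estimated standard error (a z-score).
    Positive => associated with group A; negative => group B."""
    la = math.log((count_a + alpha) / (total_a - count_a + alpha))
    lb = math.log((count_b + alpha) / (total_b - count_b + alpha))
    var = 1.0 / (count_a + alpha) + 1.0 / (count_b + alpha)
    return (la - lb) / math.sqrt(var)

# Toy example: an identifier appearing in 300 of 10,000 group-A
# profiles but only 50 of 10,000 group-B profiles scores strongly
# positive; swapping the groups flips the sign.
z = log_odds_z(300, 10_000, 50, 10_000)
```

Scores well above 2 (or below -2) would be flagged as significant alignments under the usual normal approximation, which is one plausible reading of the starred values in the Statistics section.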


Statistics
"Christian" (LR: 64.39**, EST: 40.85**), "God" (LR: 43.34**, EST: -2.48), "Jesus" (LR: 22.69**, EST: 4.19), "Catholic" (LR: 20.16**, EST: 20.16**) "dj" (LR: -21.53**, EST: -21.58**), "producer" (LR: -20.27**, EST: -18.37**) "military" (LR: 28.66**, EST: -2.84), "veteran" (LR: 21.99**, EST: 7.23*), "police" (LR: 14.02*, EST: -3.16) "blacklivesmatter" (LR: -22.72**, EST: 0.16), "alllivesmatter" (LR: 9.71*, EST: -5.36*), "bluelivesmatter" (LR: 18.11**, EST: -11.25**) "feminist" (LR: -49.22**, EST: 6.53**), "environmentalist" (LR: -16.69*, EST: -1.679), "resistance" (LR: 16.52*, EST: 0.79) "mom" (LR: 1.15, EST: 15.55**), "grandmother" (LR: 6.63, EST: 8.80*), "sister" (LR: -6.74, EST: 6.17*) "lgbt" (LR: -17.8**, EST: 2.94), "gay" (LR: -16.51**, EST: 1.45), "she" (LR: -11.71*, EST: 0.83), "her" (LR: -8.36*, EST: 2.36) "american" (LR: 35.35**, EST: 7.015*), "israel" (LR: 25.86**, EST: 3.086), "texan" (LR: 12.54*, EST: 9.67*) "patriot" (LR: 39.38**, EST: 3.66), "freedom" (LR: 22.30**, EST: 2.02), "liberty" (LR: 20.41**, EST: 3.06) "deplorable" (LR: 36.47**, EST: -24.79**), "covfefe" (LR: 13.32*, EST: -12.16**) "tgdn" (LR: 12.32*, EST: 9.04*)
Quotes
"deplorable" "covfefe" "tgdn"

Deeper Inquiries

How do users' perceptions of alignments between non-political and political identifiers on Twitter translate to their real-world attitudes and behaviors towards different political groups?

Users' perceptions of alignments between non-political and political identifiers on Twitter can have significant implications for their real-world attitudes and behaviors towards different political groups. When users consistently see certain non-political identifiers co-occurring with specific political identities in profiles, it can reinforce their associations between those identifiers and political groups. This can lead to the formation of stereotypes and biases based on these perceived alignments. For example, if users frequently see identifiers related to a certain social movement or belief system alongside a particular political identity, they may start to associate that movement or belief system with the political group, even if the actual overlap is not as significant.

These perceptions can influence how users interact with individuals or groups who hold different political beliefs. If a user perceives that certain non-political identifiers are strongly aligned with a political group they oppose, they may be more likely to avoid or engage negatively with individuals displaying those identifiers. This can contribute to increased polarization, as users may be less willing to engage in constructive dialogue or seek common ground with those they perceive as belonging to opposing political groups.

In the real world, these perceptions can manifest in behaviors such as selective social interactions, echo chambers, and even discriminatory actions based on perceived political affiliations. Users may be more inclined to seek out information and engage with individuals who align with their perceived political identities, leading to a reinforcement of existing beliefs and attitudes. This can further entrench divisions between political groups and hinder efforts to foster understanding and cooperation across ideological lines.

How do bots and coordinated influence campaigns contribute to the observed patterns of subjective social sorting on Twitter, and how can this be disentangled from organic user behavior?

Bots and coordinated influence campaigns can significantly contribute to the observed patterns of subjective social sorting on Twitter by amplifying certain narratives, promoting specific ideologies, and artificially inflating the visibility of certain political identities and affiliations. These actors can strategically manipulate the presentation of non-political and political identifiers in user profiles to create the perception of alignment between certain groups and beliefs. By artificially boosting the visibility and prominence of specific identifiers, bots and coordinated campaigns can influence how users perceive the relationships between different identities and political groups.

Disentangling the impact of bots and coordinated influence campaigns from organic user behavior can be challenging but is essential for understanding the true dynamics of subjective social sorting on Twitter. One approach is to analyze patterns of activity and engagement to identify suspicious behaviors that may indicate bot activity, such as high-frequency posting, repetitive content, and interactions with a large number of accounts in a short period. Additionally, examining the source of information and the dissemination patterns of certain narratives can help differentiate between organic user-generated content and artificially amplified messaging.

Platforms can also implement advanced detection algorithms and moderation strategies to identify and mitigate the influence of bots and coordinated campaigns. By monitoring user behavior, content patterns, and network interactions, platforms can detect and remove inauthentic accounts and content that contribute to subjective social sorting. Transparency in content moderation practices and increased user education on identifying misinformation and manipulation can also help users distinguish between organic and manipulated content.
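The activity-based signals mentioned above (high-frequency posting, repetitive content, skewed follow behavior) can be combined into a crude screening heuristic. The thresholds and field names below are illustrative assumptions, not values from the study or from any production bot detector:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    distinct_texts_ratio: float  # unique tweet texts / total tweets
    followers: int
    following: int

def bot_suspicion_score(a: Account) -> float:
    """Heuristic score in [0, 3]: one point per suspicious signal.
    Thresholds are illustrative, chosen for this sketch only."""
    score = 0.0
    if a.tweets_per_day > 72:          # ~1 tweet every 20 min, nonstop
        score += 1.0
    if a.distinct_texts_ratio < 0.5:   # more than half duplicate content
        score += 1.0
    if a.following > 10 * max(a.followers, 1):  # follows far more than followed
        score += 1.0
    return score

suspect = Account(tweets_per_day=200, distinct_texts_ratio=0.2,
                  followers=10, following=5000)
print(bot_suspicion_score(suspect))  # prints 3.0
```

In practice, such scores would only flag accounts for closer review; real detection systems combine many more features (timing entropy, network structure, content embeddings) and typically use trained classifiers rather than fixed thresholds.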

What are the long-term cognitive and social impacts of exposure to subjective social sorting on social media, and how can platform design mitigate potential harms while preserving the benefits of self-expression?

The long-term cognitive and social impacts of exposure to subjective social sorting on social media can be profound, influencing individuals' perceptions, attitudes, and behaviors both online and offline. Constant exposure to content that reinforces perceived alignments between non-political and political identifiers can contribute to the formation of echo chambers, confirmation biases, and polarization. This can lead to a narrowing of perspectives, reduced empathy towards those with differing views, and an increased likelihood of engaging in hostile or divisive interactions with others.

In the long term, exposure to subjective social sorting can contribute to the entrenchment of ideological divides, the erosion of trust in diverse perspectives, and the amplification of group-based animosities. This can have detrimental effects on societal cohesion, democratic discourse, and the ability to engage in constructive dialogue and problem-solving across political differences.

Platform design plays a crucial role in mitigating the potential harms of subjective social sorting while preserving the benefits of self-expression and diverse discourse. Platforms can implement features that promote diverse content exposure, such as algorithmic transparency, content moderation that targets misinformation and harmful content, and tools that encourage users to engage with a variety of perspectives. By fostering a culture of respectful dialogue, promoting critical thinking skills, and providing resources for media literacy, platforms can empower users to navigate subjective social sorting and engage thoughtfully with diverse viewpoints.

Additionally, platform designers can prioritize user well-being by creating spaces that prioritize civil discourse, empathy, and understanding. Features that encourage constructive interactions, facilitate meaningful connections across ideological lines, and promote fact-based discussions can help counteract the negative impacts of subjective social sorting. By fostering a culture of inclusivity, openness, and mutual respect, platforms can create environments that support healthy dialogue and bridge divides in a polarized digital landscape.