One Style Does Not Regulate All: A Qualitative Study of Moderation Practices in Public and Private WhatsApp Groups in India and Bangladesh


Core Concepts
Volunteer moderation styles on WhatsApp vary greatly between public and private groups, influenced by factors like social ties and cultural norms, demanding nuanced design solutions beyond a one-size-fits-all approach.
Abstract

This research paper investigates the moderation practices of administrators (admins) in public and private WhatsApp groups in India and Bangladesh. The authors conducted semi-structured interviews with 32 admins and observed 30 public groups to understand the challenges of content moderation on an end-to-end encrypted platform.

Research Objective:

The study aimed to explore how WhatsApp group admins exercise care and control when dealing with problematic content and identify potential improvements for volunteer moderation on the platform.

Methodology:

The researchers employed a qualitative approach, conducting semi-structured interviews with 23 private and 9 public group admins in India and Bangladesh. They also observed user activities and admin responses to problematic content in 30 public WhatsApp groups. The study utilized Baumrind's typology of parenting styles as a lens to analyze the observed moderation practices.

Key Findings:

  • Admins in private groups, particularly family and friends groups, often adopted a permissive moderation style, prioritizing social harmony over strict content control.
  • Authoritative moderation, characterized by clear rules and communication, was more prevalent in private groups with weaker social ties, such as educational or professional groups.
  • Public groups exhibited either authoritarian moderation, with admins restricting group interaction, or an uninvolved approach, with admins neglecting their moderation responsibilities.
  • The study highlighted the influence of cultural factors, particularly in collectivist societies like India and Bangladesh, where offline relationships significantly impact online moderation decisions.

Main Conclusions:

The authors argue that a one-size-fits-all approach to moderation is inadequate for WhatsApp, given the diverse range of group dynamics and cultural contexts. They recommend designing tools that empower admins while ensuring accountability and a balance of power.

Significance:

This research provides valuable insights into the complexities of volunteer moderation on end-to-end encrypted platforms, particularly within the context of the Global South. The findings have implications for designing more effective moderation tools and policies that consider the diversity of user needs and cultural norms.

Limitations and Future Research:

The study acknowledges limitations regarding the demographic scope and the focus on India and Bangladesh. Future research could explore moderation practices in other geographical regions and cultural contexts. Additionally, investigating the effectiveness of the proposed design recommendations would be beneficial.

Stats
  • WhatsApp is the largest social media platform in the Global South.
  • WhatsApp has the second largest active social media user base globally.
  • India has the world’s largest WhatsApp user base.
  • The researchers interviewed admins of 32 diverse WhatsApp groups.
  • The researchers reviewed content from 30 public WhatsApp groups in India and Bangladesh.
Quotes
"Although my aunt created the group, she became busy with household chores and kids and made me an admin instead." "Earlier only the elders in our housing society could become admins. But, they were not tech-savvy and couldn’t understand all the features of WhatsApp. Then they recruited us because I always have Internet connectivity and check the group actively." "I made others admin so that they could add their acquaintances to my group instead of forming new groups. This will help my group grow bigger and popular." "A Hindu colleague left our office’s WhatsApp group due to hate speech. When we noticed, we decided to apologize to him in person instead of contacting him online given the severity of the matter. After meeting, we requested him to rejoin the group." "We don’t add any unknown people to our group. We only add those who are affiliated with the press or are able to provide news materials." "When somebody wants to join the group we ask for their building and apartment numbers. We have a list of contact details for all residents in our housing society. We dial the corresponding apartment to verify if the person actually lives there." "In our group, people share religiously charged posts that would blame the Muslims for the 2019 Delhi riot, the 2020 COVID pandemic, the 2023 Odisha train collision, or almost anything that might go wrong in this country." "Every year during Durga Puja [Bengali Hindu religious festival] there are posts with anti-Hindu sentiment, that would blame the Hindus for disrespecting the Quran, the Prophet, or Muslim women to justify violence against them." "People think writing on WhatsApp is safe. I doubt if WhatsApp’s encryption would work in Bangladesh given the country’s strict digital law against anti-government content. The government might trace such messages on WhatsApp and accuse admins." "After clicking on a spam link shared in the group, many group members’ Facebook accounts got hacked. Some girls’ sensitive photos were leaked and when we informed our teachers, they filed a cybercrime police complaint. The police interrogated everyone, recovered the hacked accounts, and asked the admins to disable the group." "Recently while everyone was paying respect to a deceased colleague, someone shared a joke without paying attention to the ongoing conversation. This is insincere and displays a lack of common sense." "Communal hate speech has been normalized in India over the years and none has the time or energy to protest such content. Most group members just care about staying connected with college friends instead of constantly arguing with them." "During COVID many group members blamed Muslims for the rise of COVID in India. This triggered not only Muslim but other considerate group members from different religions, who decided to give up networking opportunities instead of being in groups that discriminated against people for their religion." "Sometimes people intentionally share phishing links in the group. If I don’t notice, other classmates will call them out as spams and question the group member who shared that content." "Previous admins had fights because the elderly admin argued that non-blood relative should not send too many messages. But, the younger admin disagreed and was forced to leave the group." "Since all members in our group are women healthcare workers and the other co-admins are men, it’s appropriate that I [a female admin] deal with the group affairs."

Deeper Inquiries

How can social media platforms balance the need for content moderation with the preservation of user privacy, especially in end-to-end encrypted environments?

This is a significant challenge with no easy solutions. Here's a breakdown of the complexities and potential approaches:

The Challenge:

  • Encryption vs. Oversight: End-to-end encryption (E2EE) means that only the sender and recipient can read message content; not even the platform can. This protects user privacy but makes it technically impossible for platforms to proactively monitor and moderate content.
  • Freedom of Expression vs. Harmful Content: A balance must be struck between allowing users to express themselves freely and protecting users from harmful content like hate speech, misinformation, and incitement to violence.
  • Scalability: The sheer volume of content generated on social media platforms makes manual moderation impractical. Automated solutions are necessary but come with their own set of challenges.

Potential Approaches:

  • Focus on Metadata and User Reports: While respecting E2EE, platforms can analyze unencrypted metadata (e.g., group names, frequency of messages, user join/leave patterns) to identify potentially problematic groups. They can also encourage and facilitate user reporting mechanisms, making it easier for users to flag harmful content (a rough sketch follows this answer).
  • On-Device AI and Federated Learning: Platforms can explore using on-device artificial intelligence (AI) to detect harmful content locally on users' devices without transmitting the actual content to the platform's servers. Federated learning, where AI models are trained across multiple devices without sharing the underlying data, is another promising avenue.
  • Transparency and User Control: Platforms should be transparent about their moderation policies and the limitations imposed by E2EE. Providing users with more control over their own experience, such as allowing them to mute keywords or users, can also be empowering.
  • Community-Based Moderation: Platforms can empower and support volunteer moderators (admins) by providing them with better tools and resources. This can include training materials, mechanisms for reporting and escalating issues, and access to aggregated, anonymized data about group activity.
  • Collaboration and Industry Standards: Addressing the challenges of content moderation in E2EE environments requires collaboration between platforms, researchers, policymakers, and civil society organizations. Developing industry standards and best practices can help ensure a consistent and responsible approach.

No single solution will perfectly balance these competing priorities. A multi-faceted approach that combines technical solutions, community engagement, and policy interventions is likely to be most effective.
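
As a rough illustration of the metadata-plus-reports idea above, the sketch below scores a group from unencrypted signals alone and leaves message content untouched. The `GroupMetadata` fields, weights, and thresholds are hypothetical assumptions for this sketch, not any platform's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class GroupMetadata:
    # Unencrypted signals only -- message content stays end-to-end encrypted.
    name: str
    messages_per_day: float
    member_churn_per_week: int   # joins + leaves
    user_reports: int            # reports filed by group members

def risk_score(meta: GroupMetadata) -> float:
    """Combine coarse metadata signals into a 0-1 score for human review."""
    score = 0.0
    if meta.user_reports > 0:
        score += min(meta.user_reports / 10, 0.5)   # member reports weigh most
    if meta.messages_per_day > 500:
        score += 0.2                                # unusually high volume
    if meta.member_churn_per_week > 50:
        score += 0.2                                # rapid join/leave churn
    return min(score, 1.0)

if __name__ == "__main__":
    group = GroupMetadata("Neighbourhood News", 650, 80, 4)
    print(f"risk score: {risk_score(group):.2f}")   # high scores go to reviewers
```

A score like this would only prioritize groups for human or community review; it cannot, and is not meant to, reveal what was said inside them.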

Could providing admins with more nuanced moderation tools, beyond simply banning or deleting content, empower them to address problematic content more effectively while preserving group harmony?

Absolutely. Providing admins with a wider range of moderation tools can significantly improve their ability to address problematic content in a more nuanced and context-aware manner. Here are some examples:

Nuanced Moderation Tools:

  • Temporary Muting: Instead of permanently banning a user for a first-time offense, admins could temporarily mute them for a set period, allowing them to cool down and reflect on their behavior.
  • Content Warnings: Admins could flag potentially sensitive content with warnings, allowing users to choose whether or not they want to view it.
  • Message Pinning: Admins could pin important messages, such as group rules or clarifications, to the top of the chat, ensuring visibility and reducing repeat offenses.
  • Automated Rule Enforcement: Admins could set up automated rules to flag or remove content that violates group guidelines, such as excessive use of profanity or sharing of links from blacklisted domains (see the sketch after this answer).
  • Mediation Features: Platforms could provide built-in features to facilitate conflict resolution, such as anonymous reporting options or tools for structured dialogue between admins and users.

Benefits of Nuanced Moderation:

  • Preserving Group Harmony: By offering alternatives to outright bans, admins can address problematic behavior without alienating group members or creating a hostile environment.
  • Promoting Positive Group Culture: Nuanced tools can help admins foster a more respectful and inclusive group culture by encouraging constructive dialogue and discouraging harmful behavior.
  • Reducing Admin Burden: Automated tools and features can help alleviate the workload on volunteer admins, allowing them to focus on more complex moderation tasks.
  • Enhancing User Agency: Giving users more control over their own experience, such as the ability to mute specific keywords or users, can empower them to curate their own online environment.

By providing admins with a more diverse toolkit, platforms can empower them to be more effective and proactive moderators, fostering healthier and more engaging online communities.
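
A minimal sketch of the automated rule enforcement idea above, assuming a hypothetical admin-maintained blocklist and flagged-term list; the graduated actions mirror the tools discussed (warn, mute, label) rather than any real WhatsApp API.

```python
import re
from urllib.parse import urlparse

# Hypothetical, admin-configurable rules -- illustrative only, not a WhatsApp feature.
BLOCKED_DOMAINS = {"spam.example", "phish.example"}
FLAGGED_TERMS = {"badword1", "badword2"}     # placeholder list maintained by admins

URL_PATTERN = re.compile(r"https?://\S+")

def moderate(message: str) -> str:
    """Return a graduated action instead of a blunt delete-or-ban decision."""
    for url in URL_PATTERN.findall(message):
        if urlparse(url).hostname in BLOCKED_DOMAINS:
            return "remove_and_warn"         # blacklisted link: remove, warn sender
    hits = sum(term in message.lower() for term in FLAGGED_TERMS)
    if hits >= 3:
        return "temporary_mute"              # repeated flagged terms: cool-down mute
    if hits >= 1:
        return "content_warning"             # single hit: label it, don't remove it
    return "allow"

if __name__ == "__main__":
    print(moderate("check this out https://spam.example/win"))   # remove_and_warn
```

The point of the graduated return values is that the rule engine only recommends an action; the admin (or the group's own norms) decides how firmly to apply it.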

How might the evolving landscape of online communities and social interactions continue to shape the future of content moderation and the role of volunteer moderators?

The digital landscape is constantly evolving, and this evolution will undoubtedly impact content moderation and the role of volunteer moderators in the future. Here are some key trends and their potential implications:

1. The Metaverse and Virtual Worlds:

  • New Forms of Content: The metaverse will introduce new forms of user-generated content, such as 3D objects, virtual environments, and immersive experiences, requiring new moderation strategies.
  • Real-Time Interactions: The real-time nature of interactions in virtual worlds will demand more immediate and responsive moderation, potentially relying heavily on AI and automated systems.
  • Blurring of Realities: The lines between online and offline behavior may become increasingly blurred in the metaverse, raising new ethical and social considerations for content moderation.

2. Decentralized Social Networks:

  • Distributed Moderation: Decentralized platforms, built on blockchain technology, may shift moderation responsibilities away from centralized entities and towards communities themselves.
  • Algorithmic Transparency: Demands for greater transparency in content moderation algorithms are likely to increase, particularly in decentralized environments where users have more control.
  • Community Ownership: Decentralized platforms could empower communities to take ownership of their own moderation policies and practices, leading to more diverse and context-specific approaches.

3. Artificial Intelligence and Automation:

  • Enhanced Detection: AI will continue to play a crucial role in content moderation, with more sophisticated algorithms capable of detecting subtle forms of hate speech, misinformation, and harassment.
  • Ethical Considerations: As AI becomes more involved in moderation, addressing biases in algorithms and ensuring human oversight will be critical to prevent unfair or discriminatory outcomes.
  • Hybrid Approaches: The future of content moderation is likely to involve hybrid approaches that combine AI-powered detection with human review and community-based moderation (see the sketch after this answer).

4. Evolving Social Norms:

  • Cultural Context: Content moderation policies and practices will need to be sensitive to the cultural context of diverse online communities, recognizing that what is considered acceptable can vary widely.
  • User Education: Platforms will need to prioritize user education and awareness campaigns to help users understand community guidelines and the impact of harmful content.
  • Collaborative Solutions: Addressing the challenges of content moderation in the future will require collaboration between platforms, policymakers, researchers, and civil society organizations.

The role of volunteer moderators will continue to be essential in this evolving landscape. Platforms that empower and support their volunteer moderators with the necessary tools, resources, and recognition will be better positioned to foster healthy and thriving online communities.
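
To make the "hybrid approaches" point concrete, here is a minimal sketch of routing between automated action and human or community review. The thresholds and the placeholder classifier are assumptions for illustration, not any platform's real system.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical hybrid pipeline: an automated classifier handles clear-cut cases
# and defers uncertain ones to human or community reviewers. Any model that
# returns a harm probability could slot in for the classifier.

@dataclass
class Decision:
    action: str        # "remove", "allow", or "human_review"
    confidence: float

def hybrid_moderate(text: str, classifier: Callable[[str], float],
                    high: float = 0.95, low: float = 0.05) -> Decision:
    """Act automatically only when the model is confident; otherwise defer to people."""
    p_harmful = classifier(text)
    if p_harmful >= high:
        return Decision("remove", p_harmful)
    if p_harmful <= low:
        return Decision("allow", p_harmful)
    return Decision("human_review", p_harmful)   # ambiguous: route to reviewers

if __name__ == "__main__":
    toy_model = lambda text: 0.5 if "?" in text else 0.01   # placeholder classifier
    print(hybrid_moderate("Is this claim true?", toy_model))
```

Keeping the ambiguous middle band for human judgment is what preserves the role of volunteer moderators that the answer above emphasizes.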