This paper introduces DeMod, a novel tool designed to assist social media users in censoring toxic content before posting. Recognizing the limitations of existing toxicity detection tools that primarily focus on identification, the authors conducted a needfinding study on Weibo, a popular Chinese social media platform. The study revealed users' desire for a more comprehensive tool that not only detects toxicity but also provides explanations and suggests modifications.
Based on these findings, the authors developed DeMod, a ChatGPT-enhanced tool with three key modules: User Authorization, Explainable Detection, and Personalized Modification. The Explainable Detection module uses ChatGPT to produce fine-grained detection results, highlighting specific toxic keywords and immediately explaining why each is toxic. It also simulates audience reactions to the post, giving users insight into its potential social impact. The Personalized Modification module leverages ChatGPT's few-shot learning capability to suggest revisions that detoxify the content while preserving the user's intended meaning and personal language style.
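The paper does not publish DeMod's prompts or API wiring, but a minimal sketch of the two ChatGPT-backed steps might look like the following, using the OpenAI Python client. The prompt wording, model choice, and function names (`explain_toxicity`, `personalized_rewrite`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DeMod's two ChatGPT-backed steps; prompts and
# model choice are assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def explain_toxicity(post: str) -> str:
    """Fine-grained detection: ask the model to flag toxic keywords and
    briefly explain why each one is toxic."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a toxicity reviewer. List any toxic "
                        "keywords in the post and briefly explain why "
                        "each is toxic."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content


def personalized_rewrite(post: str, user_examples: list[str]) -> str:
    """Few-shot detoxification: prior non-toxic posts by the same user
    serve as style examples so the rewrite keeps their voice."""
    examples = "\n".join(f"- {e}" for e in user_examples)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the post to remove toxicity while "
                        "preserving its meaning. Match the writing style "
                        "of these examples by the same author:\n" + examples},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content
```

The few-shot pattern in `personalized_rewrite` reflects the paper's design choice of conditioning the rewrite on the user's own past posts rather than fine-tuning a model per user; the audience-reaction simulation described above would be a further prompt in the same pipeline.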
The authors implemented DeMod as a third-party tool for Weibo and conducted evaluations with 35 participants. Results demonstrated DeMod's effectiveness in detecting and modifying toxic content, outperforming baseline methods. Participants also praised its ease of use and appreciated the comprehensive functionality, particularly the dynamic explanation and personalized modification features.
The paper concludes by highlighting the importance of holistic censorship tools that go beyond simple detection. The authors emphasize the need for interpretability in both the detection process and results, empowering users to understand and regulate their online behavior.
Source: Yaqiong Li et al., arxiv.org, 11-05-2024. https://arxiv.org/pdf/2411.01844.pdf