
Targeted Hate Speech Detection on Vietnamese Social Media Texts


Key Concepts
A methodology to construct a system for targeted hate speech detection from online streaming texts on Vietnamese social media.
Summary

The paper introduces a new dataset called ViTHSD for targeted hate speech detection on Vietnamese social media texts. The dataset contains 10,000 comments, each labeled with specific targets (individuals, groups, religion/creed, race/ethnicity, politics) and three levels of hate speech (clean, offensive, hate).

The authors propose a baseline model that combines the Bi-GRU-LSTM-CNN architecture with pre-trained BERTology language models (BERT, XLM-R, PhoBERT, VELECTRA, ViSoBERT) to leverage their pre-trained text representations. The models are evaluated on the target detection task and the target-with-level detection task, using precision, recall, and F1-score.
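The baseline can be pictured as recurrent layers feeding a convolutional head. The sketch below is a minimal PyTorch rendering of that stack: a plain `nn.Embedding` stands in for the pre-trained BERTology encoder, the layer sizes are illustrative rather than taken from the paper, and the five output logits correspond to the five target categories (the real task also predicts a hate level per target, omitted here for brevity).

```python
import torch
import torch.nn as nn

class BiGRULSTMCNN(nn.Module):
    """Sketch of a Bi-GRU-LSTM-CNN classification head.

    The actual model feeds BERTology embeddings (e.g. XLM-R or PhoBERT)
    into the recurrent layers; here nn.Embedding is a stand-in for the
    pre-trained encoder. Hidden sizes are illustrative assumptions.
    """

    def __init__(self, vocab_size=12701, emb_dim=128, hidden=64,
                 n_filters=32, kernel=3, n_targets=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, n_filters, kernel)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(n_filters, n_targets)

    def forward(self, token_ids):
        x = self.emb(token_ids)            # (B, T, emb_dim)
        x, _ = self.gru(x)                 # (B, T, 2*hidden)
        x, _ = self.lstm(x)                # (B, T, 2*hidden)
        x = self.conv(x.transpose(1, 2))   # (B, n_filters, T-kernel+1)
        x = self.pool(x).squeeze(-1)       # (B, n_filters)
        return self.fc(x)                  # (B, n_targets) target logits

model = BiGRULSTMCNN()
logits = model(torch.randint(0, 12701, (2, 20)))  # 2 comments, 20 tokens each
```

Swapping the embedding layer for a frozen or fine-tuned transformer encoder recovers the BERTology variants compared in the paper.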

The results show that the XLM-R model performs the best on the target detection task, while the ViSoBERT model achieves the highest scores on the target with level detection task. The authors also propose a methodology to integrate the baseline model into an online streaming system for real-time detection of hateful comments on social media platforms.
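The online streaming integration described above can be reduced to a consumer loop that pulls incoming comments, runs the classifier, and forwards flagged results to moderation. This is a minimal sketch: the keyword-based `classify` stub is a placeholder for the trained baseline (not the paper's method), and a real deployment would read from a streaming platform such as a Kafka topic rather than an in-process queue.

```python
from queue import Queue

TARGETS = ["individuals", "groups", "religion/creed", "race/ethnicity", "politics"]
LEVELS = ["clean", "offensive", "hate"]

def classify(comment):
    """Stand-in for the trained model; a real system would invoke the
    Bi-GRU-LSTM-CNN baseline here. The keyword rule is a placeholder."""
    if "hate_word" in comment:
        return [("individuals", "hate")]
    return [("individuals", "clean")]

def stream_worker(stream, on_flag):
    """Consume comments from the incoming stream and forward any
    offensive/hate predictions to a moderation callback."""
    while not stream.empty():
        comment = stream.get()
        for target, level in classify(comment):
            if level in ("offensive", "hate"):
                on_flag(comment, target, level)

stream = Queue()
stream.put("a clean comment")
stream.put("this has hate_word in it")

flagged = []
stream_worker(stream, lambda c, t, lv: flagged.append((t, lv)))
```

Only the flagged comment reaches the callback; clean comments pass through untouched, which keeps the moderation queue small.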

The error analysis reveals that the model struggles with social media language, such as slang, abbreviations, and code-mixing, which can lead to misclassifications. The authors suggest incorporating lexicon normalization as a pre-processing step to address this challenge.


Statistics
The ViTHSD dataset contains 10,000 comments, with an average length of 57.33 words. The vocabulary size is 12,701 in the training set, 4,547 in the development set, and 5,684 in the test set. Most comments have 1 or 2 targets, with a few having up to 5. The majority of comments address the individuals and groups targets, while the religion/creed target has the fewest comments.
Quotes
"Toxic content and harmful speech are now very popular on the Internet. With the growth of social network users, toxic content will continue to spread rapidly."

"Besides, the author in [6] introduces a solution for hate speech detection on social networks by the streaming approach using the online streaming platform, which helps the system process for real-time processing."

"Hence, in this paper, we provide new datasets named ViTHSD that help to interpret hate speech in the comments."

Further Questions

How can the proposed methodology be extended to handle code-mixing and other challenges in social media language?

The proposed methodology can be extended to handle code-mixing and other challenges in social media language by incorporating techniques designed for these issues.

Code-mixing, the mixing of two or more languages within a single sentence or conversation, is common in social media texts, especially in multilingual communities. To handle it, the model can be trained on a dataset that includes code-mixed text, improving its ability to recognize such language patterns, and it can be augmented with a language-identification module that detects which languages are present and adapts processing accordingly. Sentiment analysis tools that understand the nuances of code-mixed text can further help detect hate speech expressed across languages, and pre-processing steps such as normalization and tokenization can be tailored to code-mixed input.

Other challenges, such as slang, abbreviations, and informal language, can be addressed by incorporating specialized lexicons and dictionaries, which help the model interpret informal usage. Training on a diverse range of social media data also helps capture the variability and complexity of online conversation.
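The language-identification step mentioned above can be sketched at the token level. The toy identifier below uses two tiny hand-made wordlists purely for illustration; a real system would use a trained identifier such as fastText's language-identification model, and the specific words chosen are assumptions, not a curated resource.

```python
# Toy wordlists for illustration only; a real identifier would be trained.
VI_WORDS = {"không", "được", "rất", "quá"}
EN_WORDS = {"ok", "team", "fan", "vote"}

def tag_languages(comment):
    """Label each token as Vietnamese, English, or unknown so later
    stages can route it to the appropriate normalizer/tokenizer."""
    tags = []
    for tok in comment.lower().split():
        if tok in VI_WORDS:
            tags.append((tok, "vi"))
        elif tok in EN_WORDS:
            tags.append((tok, "en"))
        else:
            tags.append((tok, "unk"))
    return tags
```

On a code-mixed fragment like "vote không", the tagger separates the English and Vietnamese tokens, which is the signal a downstream pipeline needs to apply language-specific pre-processing.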

How can the targeted hate speech detection approach be applied to other low-resource languages to address the global challenge of online hate speech?

The targeted hate speech detection approach can be applied to other low-resource languages by following a similar methodology, adapted to each language's linguistic characteristics and cultural context:

1. Dataset creation: build a targeted hate speech detection dataset for the language, annotated with targets and hate levels that reflect how hate speech manifests in that language.
2. Model training: train machine learning models, such as BERT-based models or deep neural networks, on the annotated dataset, and fine-tune them for the targeted detection task.
3. Evaluation and validation: assess performance with metrics such as precision, recall, and F1-score, and validate on diverse datasets to ensure robustness and generalizability.
4. Ethical considerations: account for cultural nuances and biases when deploying in different contexts, to avoid misclassification or unfair targeting.
5. Continuous improvement: update and refine the models with feedback and new data to maintain accuracy.

By adapting these steps to the linguistic and cultural specifics of each low-resource language, the approach can help address the global challenge of online hate speech across diverse environments.

What are the potential ethical and privacy concerns in deploying a real-time hate speech detection system on social media platforms?

Deploying a real-time hate speech detection system on social media platforms raises several ethical and privacy concerns that need to be carefully addressed:

- Freedom of speech: there is a fine line between hate speech and free expression; the system must distinguish the two accurately to avoid censoring legitimate opinion.
- Bias and fairness: models can inherit biases from their training data, leading to discriminatory outcomes; bias must be mitigated to keep detection fair.
- Privacy: real-time analysis of user-generated content involves monitoring individuals' online activity, so safeguards for user privacy and data security are essential.
- Transparency: users should be informed that hate speech detection is in use and how their data is processed; transparency builds trust.
- Accountability: clear guidelines and policies are needed for handling detected hate speech, including appeal mechanisms and the correction of false positives.
- Impact on marginalized communities: the system should avoid exacerbating existing inequalities or disproportionately targeting vulnerable groups.
- Data retention and storage: user data must be securely stored and used only for the intended purpose of hate speech detection.
- Algorithmic decision-making: automated decisions can have unintended consequences, so regular monitoring and human oversight are necessary.
Addressing these ethical and privacy concerns is essential in the deployment of real-time hate speech detection systems on social media platforms to ensure that they operate responsibly and respect users' rights and freedoms.