
Disambiguating Pejorative Epithets to Improve Misogyny Detection in Italian Tweets


Key Concepts
Disambiguating the meaning of pejorative words can help improve the detection of misogynistic language in Italian tweets.
Summary

The authors introduce pejorative word disambiguation as a preliminary step for misogyny detection, aiming to reduce the error rate of classification models on polysemic words that can serve as pejorative epithets. They build a lexicon of polysemic words with both pejorative and neutral connotations and use it to compile a novel corpus of 1,200 manually annotated Italian tweets for pejorative word disambiguation and misogyny detection.
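The lexicon-plus-corpus setup lends itself to a simple keyword pass over tweets. Below is a minimal sketch of how such a lexicon could flag candidate words for annotation; the two entries are illustrative examples of this kind of Italian polysemy, and the paper's actual lexicon is not reproduced here.

```python
# Illustrative lexicon of polysemic words with a neutral and a pejorative
# reading. Entries are examples only, not the paper's lexicon.
LEXICON = {
    "cozza": {"neutral": "mussel", "pejorative": "ugly woman"},
    "balena": {"neutral": "whale", "pejorative": "fat woman"},
}

def candidate_words(tweet: str) -> list[str]:
    """Return lexicon words found in the tweet; each occurrence must be
    disambiguated at the word level before sentence-level misogyny detection."""
    return [tok for tok in tweet.lower().split() if tok in LEXICON]

print(candidate_words("sei proprio una balena"))  # ['balena']
```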

The authors evaluate two approaches to inject pejorativity information into a misogyny detection model: concatenation and substitution. The results show that the disambiguation of potentially pejorative words leads to notable classification improvements on their corpus and two benchmark corpora in Italian. The authors also analyze the word embedding representation of the AlBERTo model and show that the encoding of lexicon words is closer to their ground-truth connotation after fine-tuning.
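A minimal sketch of the two injection strategies follows, assuming an upstream disambiguation model has already predicted each lexicon word's connotation; the separator token and phrasing are assumptions for illustration, not the paper's exact input format.

```python
# Two ways to inject a predicted connotation into the detection model's input.
# Separator and wording are assumptions, not the paper's specification.

def concatenate(tweet: str, word: str, connotation: str) -> str:
    """Keep the tweet intact and append the disambiguation result."""
    return f"{tweet} [SEP] {word} is {connotation} here"

def substitute(tweet: str, word: str, meaning: str) -> str:
    """Replace the ambiguous word with its disambiguated meaning."""
    return tweet.replace(word, meaning)

tweet = "sei proprio una balena"
print(concatenate(tweet, "balena", "pejorative"))  # appended clue
print(substitute(tweet, "balena", "fat woman"))    # rewritten input
```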

Furthermore, the authors qualitatively analyze several off-the-shelf instruction-tuned large language models on pejorative word disambiguation, showing that there is ample room for improvement in this task.


Stats
"Misogyny is often expressed through figurative language. Some neutral words can assume a negative connotation when functioning as pejorative epithets." "State-of-the-art models struggle to correctly classify misogyny when sentences contain such terms." "The occurrence of polysemic words with a pejorative connotation in the training set and a neutral connotation in the test set results in a great number of false positives."
Quotes
"Pejorative language refers to a word or phrase that has negative connotations and is intended to disparage or belittle." "An inoffensive word becoming pejorative is a form of semantic drift known as pejoration; thus, pejorativity is context-dependent: pejorative words have one primary neutral meaning, and another negatively connotated meaning."

Key Insights Distilled From

by Aria... at arxiv.org on 04-04-2024

https://arxiv.org/pdf/2404.02681.pdf
PejorativITy

Deeper Inquiries

How can the proposed approach be extended to other languages and cultures to conduct cross-cultural analyses of pejorative terms for misogyny?

To extend the proposed approach to other languages and cultures for cross-cultural analyses of pejorative terms for misogyny, several steps can be taken. First, a comprehensive study of the specific language and culture is needed to identify the prevalent pejorative terms used against women. This involves compiling a lexicon of polysemic words with both pejorative and neutral connotations, as was done for Italian.

Next, a corpus of tweets or other text containing these terms can be collected and annotated by experts in linguistics, gender studies, and NLP from the respective language and culture. The annotations should cover pejorative word disambiguation at the word level and misogyny detection at the sentence level, mirroring the methodology of the Italian study.

Once the corpus is annotated, the same pipeline can be applied: a model is fine-tuned on the language-specific data for pejorative word disambiguation, and its predictions are used to enrich the input of the misogyny detection model.

Cross-cultural analyses can then compare results and model performance across languages and cultures, yielding insights into how pejorative terms for misogyny vary and how well the disambiguation approach transfers to different contexts.
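For the fine-tuning step above, a standard encoder-classification setup suffices. Here is a hedged sketch using Hugging Face transformers; the AlBERTo checkpoint id and the [T]...[/T] target-word markers are assumptions, not the paper's exact setup.

```python
# Hedged sketch: fine-tune a monolingual encoder to classify the connotation
# of a marked target word (0 = neutral, 1 = pejorative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0"  # assumed AlBERTo id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Mark which word to disambiguate; the marker convention is an assumption.
inputs = tokenizer("sei proprio una [T] balena [/T]", return_tensors="pt")
labels = torch.tensor([1])  # pejorative in this context

loss = model(**inputs, labels=labels).loss  # standard cross-entropy
loss.backward()  # one step of an ordinary fine-tuning loop
```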

What other techniques, beyond word substitution and concatenation, could be explored to effectively leverage pejorative word disambiguation for misogyny detection?

Beyond word substitution and concatenation, several other techniques could be explored to leverage pejorative word disambiguation for misogyny detection (a sketch of the multi-task variant follows this list):

Contextual Embeddings Analysis: In addition to word-level disambiguation, analyzing the contextual embeddings of sentences containing pejorative terms can provide valuable signal. Models can be trained to capture the context in which pejorative terms are used and how they contribute to the overall sentiment of the sentence.

Semantic Role Labeling: Identifying the semantic role of pejorative terms within a sentence gives a deeper understanding of how these terms are used and their impact on the sentence's overall meaning.

Syntax Analysis: Analyzing the syntactic structure of sentences containing pejorative terms can reveal patterns in how these terms relate to other words, which helps in disambiguating pejorative epithets.

Knowledge Graph Integration: External resources that encode word connotations can inform the disambiguation process, letting models make more grounded decisions about whether a specific term is pejorative.

Multi-task Learning: Training a model jointly on related tasks, such as sentiment analysis, hate speech detection, and pejorative word disambiguation, can improve its overall understanding of the contexts in which pejorative terms are used.
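As a concrete instance of the multi-task idea, one shared encoder can feed two classification heads trained jointly; all names and dimensions below are hypothetical.

```python
# Hypothetical multi-task setup: one shared encoder, two task heads.
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int = 768):
        super().__init__()
        self.encoder = encoder                         # any pretrained transformer
        self.pejorativity_head = nn.Linear(hidden, 2)  # word level: neutral / pejorative
        self.misogyny_head = nn.Linear(hidden, 2)      # sentence level: misogynous / not

    def forward(self, **inputs):
        h = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
        return self.pejorativity_head(h), self.misogyny_head(h)

# Training minimizes a weighted sum of the two cross-entropy losses, so the
# encoder is pushed to represent both a word's connotation and the
# sentence's misogyny status.
```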

How can instruction-tuned large language models be further improved to better understand and disambiguate pejorative epithets in context?

Instruction-tuned large language models can be further improved at understanding and disambiguating pejorative epithets in context through the following strategies (an illustrative prompt-based probe follows this list):

Fine-tuning on Pejorative Disambiguation: Training instruction-tuned models directly on pejorative word disambiguation can sharpen their ability to distinguish neutral from pejorative connotations. Explicit training data and feedback on pejorative terms teach the models to interpret and classify such terms more reliably.

Data Augmentation: Augmenting the training data with diverse examples of pejorative terms used in different contexts helps the models generalize, so they can disambiguate pejorative epithets more effectively across a wide range of scenarios.

Contextual Understanding: Mechanisms that capture and analyze the context in which a pejorative term is used, such as attention focused on the relevant span of the input, improve the models' grasp of the nuances of such language.

Bias Mitigation: Techniques that reduce biases in the training data and model predictions, such as debiasing algorithms and fairness constraints, are crucial for accurate handling of pejorative language.
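A hedged sketch of how such a probe might look with an OpenAI-style chat API; the prompt wording and model name are illustrative assumptions, not the paper's evaluation protocol.

```python
# Illustrative probe of an instruction-tuned LLM on pejorative disambiguation.
from openai import OpenAI

client = OpenAI()
prompt = (
    "In the Italian tweet below, does 'balena' mean the animal (neutral) "
    "or is it a pejorative epithet aimed at a woman?\n\n"
    "Tweet: sei proprio una balena\n"
    "Answer with one word: 'neutral' or 'pejorative'."
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```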