Core Concepts
The author proposes a Deep Prompt Multi-task Network (DPMN) that combines prompt-based learning with multi-task learning to address the limitations of existing abusive language detection methods.
Abstract
The paper introduces the challenges of detecting abusive language on social networks and presents the proposed DPMN. It covers prompt-based learning, the multi-task network architecture, and experimental results showing superior performance over existing methods.
The article traces the evolution of abusive language detection from conventional machine learning to deep learning methods and large pre-trained language models (PLMs) such as BERT. It emphasizes the need to exploit PLM knowledge more effectively through prompt-based learning.
Key components of DPMN include deep prompt tuning, light prompt tuning, a task head based on a Bi-LSTM and an FFN, and multi-task learning. Experiments on three public datasets (OLID, SOLID, and AbuseAnalyzer) demonstrate that DPMN outperforms state-of-the-art methods in detecting abusive language.
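A common way to train such a multi-task setup is to minimize a weighted sum of the per-task losses. The sketch below illustrates that general formulation only; the task names and weights are made up for the example and are not values from the paper.

```python
def multi_task_loss(task_losses, weights):
    """Combine per-task losses into a single training objective
    as a weighted sum (a standard multi-task formulation)."""
    assert set(task_losses) == set(weights), "tasks and weights must match"
    return sum(weights[t] * task_losses[t] for t in task_losses)

# Illustrative values only -- not from the paper.
losses = {"offensive": 0.62, "target": 0.41}
weights = {"offensive": 0.7, "target": 0.3}
total = multi_task_loss(losses, weights)
```

In practice the weights are hyperparameters that balance how strongly each auxiliary task influences the shared encoder.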
The study also includes ablation experiments that analyze the contribution of each DPMN component, showing the effectiveness of deep continuous prompt learning. Convergence analysis and implementation details further support the experimental findings.
Stats
Macro F1 scores of DPMN: 0.8384 (OLID), 0.9218 (SOLID), 0.8165 (AbuseAnalyzer)
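The metric behind these numbers, macro F1, is the unweighted mean of per-class F1 scores, so minority classes count as much as majority ones. A minimal sketch with toy labels (the example data is illustrative, not from the datasets above):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy binary example: per-class F1s are 0.5 and 2/3.
score = macro_f1([0, 0, 1, 1, 1], [0, 1, 1, 1, 0])
```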
Quotes
"It is essential to minimize the psychological toll on victims to stop hate crimes."
"Prompt tuning has been a great success for most natural language processing tasks."
"The proposed DPMN achieves excellent results in detecting abusive language."