Core Concepts
Contrastive learning can be an effective approach to detecting machine-generated text: a single model can perform well without knowledge of which generator produced the text.
Summary
The paper describes a system developed for SemEval-2024 Task 8, "Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection". The key challenges of the task were:
- The machine-generated text was produced by five different language models, making it difficult to rely on the artifacts of any single generator for detection.
- The validation and test datasets were generated by a different model than the training data, requiring a model that generalizes to unseen generators.
To address these challenges, the authors propose a novel system based on contrastive learning:
- Data Augmentation: A paraphrasing model generates an alternate text for each instance, creating positive and negative pairs for contrastive learning.
- Contrastive Learning: A shared encoder produces embeddings for both members of each pair, and a contrastive loss is optimized to learn representations that separate human-written from machine-generated text.
- Classification Head: A simple two-layer classifier head on top of the learned embeddings performs the final binary classification (a minimal sketch of all three components follows this list).
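A minimal PyTorch sketch of the three components, assuming a Hugging Face encoder, first-token pooling, and the classic margin-based pairwise contrastive loss; the `roberta-base` checkpoint, head sizes, margin, and exact loss formulation are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class ContrastiveDetector(nn.Module):
    """One encoder shared by both members of a pair, plus a two-layer head."""
    def __init__(self, encoder_name="roberta-base", hidden=768, dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # shared weights
        self.classifier = nn.Sequential(   # simple two-layer classification head
            nn.Dropout(dropout),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),          # human-written vs. machine-generated
        )

    def embed(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state[:, 0]  # first-token ([CLS]-style) pooling

    def forward(self, input_ids, attention_mask):
        z = self.embed(input_ids, attention_mask)
        return self.classifier(z), z

def pair_contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull positive pairs together, push negative pairs at least `margin` apart.
    `same_label` is 1.0 where the pair shares a class, 0.0 otherwise."""
    d = F.pairwise_distance(z1, z2)
    pos = same_label * d.pow(2)
    neg = (1.0 - same_label) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# Usage on one hypothetical pair (an original sentence and its paraphrase):
tok = AutoTokenizer.from_pretrained("roberta-base")
model = ContrastiveDetector()
enc = lambda s: tok([s], return_tensors="pt", truncation=True, max_length=256)
logits, z1 = model(**enc("An example sentence."))
_, z2 = model(**enc("A paraphrase of the example sentence."))
same = torch.ones(1)  # positive pair: both members share a label
loss = pair_contrastive_loss(z1, z2, same) \
     + F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```

As in the last line, the contrastive term would typically be combined with a cross-entropy loss on the classifier output during training; at inference only the encoder and head are needed.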
The authors show that their single model, which uses around 40% fewer parameters than the baseline, can achieve comparable performance on the test dataset. They also conduct an extensive ablation study to understand the impact of various hyperparameters, such as maximum sentence length, classification dropout, and effective batch size.
The key findings are:
- Contrastive learning with data augmentation can enable a single model to achieve comparable performance to an ensemble of models, without relying on the specific text generation model.
- The model can effectively identify machine-generated text even in documents as long as 256 words, demonstrating its adaptability.
- Reducing the classification dropout and using a smaller effective batch size can lead to further performance improvements (a toy sketch of effective batch size via gradient accumulation follows this list).
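The "effective batch size" is typically the per-device batch size multiplied by the number of gradient-accumulation steps. A runnable toy sketch of that relationship, with a placeholder model and random data standing in for the real setup:

```python
import torch
from torch import nn, optim

# Placeholder model and data (hypothetical), just to make the loop runnable.
model = nn.Linear(16, 2)
optimizer = optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
batches = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(12)]

accum_steps = 4  # effective batch size = 8 (per-device) * 4 (accumulation) = 32
optimizer.zero_grad()
for step, (x, y) in enumerate(batches):
    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average correctly
    loss.backward()
    if (step + 1) % accum_steps == 0:  # fewer accum_steps => smaller effective batch
        optimizer.step()
        optimizer.zero_grad()
```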
The authors suggest future work could explore the use of more advanced contrastive loss functions and prompt-based data augmentation models.
Statistics
The dataset provided in the shared task contains texts and their corresponding labels.
The authors split each document into multiple sentences for paraphrasing, resulting in approximately 3.6 million sentences.
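A sketch of how this sentence-level splitting and paraphrasing might be implemented, using NLTK for sentence boundaries and a T5-based paraphraser; the `humarin/chatgpt_paraphraser_on_T5_base` checkpoint and generation settings are assumptions, since the summary does not name the exact paraphrasing model:

```python
import nltk
from transformers import pipeline

nltk.download("punkt", quiet=True)  # sentence-boundary models for sent_tokenize

# Hypothetical paraphraser choice; swap in the model the actual system used.
paraphraser = pipeline("text2text-generation",
                       model="humarin/chatgpt_paraphraser_on_T5_base")

def paraphrase_document(document: str) -> list[tuple[str, str]]:
    """Split a document into sentences and pair each with one paraphrase."""
    pairs = []
    for sentence in nltk.sent_tokenize(document):
        out = paraphraser("paraphrase: " + sentence, max_length=128)
        pairs.append((sentence, out[0]["generated_text"]))
    return pairs
```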
Quotes
"Our key finding is that even without an ensemble of multiple models, a single base model can have comparable performance with the help of data augmentation and contrastive learning."