Core Concepts
Proposing the EDDA framework for improved zero-shot stance detection through encoder-decoder data augmentation.
Abstract
This paper introduces the EDDA framework for zero-shot stance detection, addressing limitations of existing data augmentation methods. It outlines the encoder-decoder augmentation process, the rationale-enhanced network, and experimental results showing significant improvements over state-of-the-art techniques.
Introduction
Stance detection aims to determine the attitude expressed in a text toward a given target.
Zero-shot stance detection (ZSSD) classifies stances toward targets unseen during training.
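To make the task interface concrete, the sketch below frames ZSSD as mapping a (text, target) pair to one of three stance labels. The keyword heuristic is purely illustrative, a stand-in for a learned model, and the cue words and label names are assumptions, not the paper's method.

```python
# Toy illustration of the ZSSD task interface: a stance classifier maps
# (text, target) pairs to one of three labels. The keyword-based logic
# below is a placeholder, not the paper's model.

STANCES = ("favor", "against", "neutral")

FAVOR_CUES = {"support", "great", "love", "agree"}
AGAINST_CUES = {"oppose", "bad", "hate", "disagree"}

def classify_stance(text: str, target: str) -> str:
    """Return a stance label for `text` toward `target` (toy heuristic)."""
    if target.lower() not in text.lower():
        return "neutral"  # target never mentioned
    words = set(text.lower().split())
    if words & FAVOR_CUES:
        return "favor"
    if words & AGAINST_CUES:
        return "against"
    return "neutral"

# Zero-shot setting: the target "solar energy" was never seen at training
# time, yet the classifier must still produce a label for it.
print(classify_stance("I support solar energy", "solar energy"))  # favor
```

The zero-shot difficulty is that a real model cannot rely on target-specific cues seen in training, which is why augmentation toward unseen targets matters.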
Methodology
The EDDA framework leverages large language models in an encoder-decoder scheme: texts are summarized into target-specific if-then rationales, from which syntactically diverse new training samples are generated.
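The encoder-decoder augmentation loop can be sketched as below. This is a minimal sketch under assumptions: `llm` is a hypothetical stand-in for a real large-language-model call, and the prompt wording is illustrative rather than taken from the paper.

```python
# Sketch of an encoder-decoder augmentation loop: encode a labeled text
# into an if-then rationale, then decode a new sample from it.
# `llm` is a stub standing in for a real LLM call (assumption).

def llm(prompt: str) -> str:
    """Stub that returns canned text; a real system would call an LLM."""
    if "if-then" in prompt:
        return "If the text praises renewables, then the stance is favor."
    return "Renewable power deserves our backing."

def encode_rationale(text: str, target: str) -> str:
    # Encoder step: summarize the labeled text into a target-specific
    # if-then rationale.
    return llm(f"Summarize the stance of {text!r} toward {target!r} "
               f"as an if-then rationale.")

def decode_sample(rationale: str) -> str:
    # Decoder step: generate a new, syntactically different sentence
    # consistent with the rationale.
    return llm(f"Write a new sentence expressing the stance in: {rationale}")

def augment(text: str, target: str, label: str) -> tuple[str, str, str]:
    rationale = encode_rationale(text, target)
    new_text = decode_sample(rationale)
    return (new_text, target, label)  # augmented example keeps the label
```

The design point is that the rationale acts as an intermediate, target-aware representation, so the decoded samples vary in surface form while preserving the stance label.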
Experimental Setup
Experiments on benchmark datasets demonstrate substantial performance improvements with EDDA.
Results Analysis
EDDA outperforms baselines and enhances LLMs' performance in ZSSD tasks.
Comparison with Baselines
EDDA significantly improves other ZSSD models when integrated.
Conclusion
The proposed EDDA framework shows promise in enhancing zero-shot stance detection.
Stats
Recent data augmentation techniques have limitations when applied to ZSSD.
Experiments show substantial improvements with the proposed EDDA framework.
Quotes
"We propose an encoder-decoder data augmentation (EDDA) framework."
"Our approach substantially improves over state-of-the-art ZSSD techniques."