Key Concepts
CrossTune improves few-shot text classification by leveraging label descriptions and ChatGPT-generated data.
Summary
Large-scale language models (LLMs) are powerful but require substantial computational resources to train or fine-tune.
Current research focuses on adapting black-box models to downstream tasks using gradient-free prompt optimization.
CrossTune introduces a label-enhanced cross-attention network for few-shot text classification without prompt search.
Uses ChatGPT as an in-context learner to generate additional training data, improving generalization when labeled examples are scarce.
Outperforms state-of-the-art methods by 5.7% on average, showcasing the effectiveness of the approach.
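The cross-attention idea described above can be sketched in a few lines: text token embeddings attend to the token embeddings of each label's description, producing a label-aware text representation whose similarity to the text yields a class score. This is a minimal illustrative sketch, not CrossTune's actual architecture; all function names, the scoring rule, and the use of raw dot-product similarity are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_score(text_emb, label_emb):
    """Score one label: text tokens (T, d) attend to label-description tokens (L, d).

    Hypothetical scoring rule: scaled dot-product attention from text to the
    label description, then mean similarity between each text token and its
    label-aware context vector.
    """
    d = text_emb.shape[1]
    attn = softmax(text_emb @ label_emb.T / np.sqrt(d), axis=-1)   # (T, L)
    label_ctx = attn @ label_emb                                   # (T, d)
    return float(np.mean(np.sum(text_emb * label_ctx, axis=1)))

def classify(text_emb, label_embs):
    """Return (predicted class index, class probabilities) over all labels."""
    scores = np.array([cross_attention_score(text_emb, e) for e in label_embs])
    return int(np.argmax(scores)), softmax(scores)

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))                       # 5 text tokens, dim 8
labels = [rng.normal(size=(3, 8)) for _ in range(4)] # 4 label descriptions
pred, probs = classify(text, labels)
```

In the full system the embeddings would come from a frozen black-box encoder, and only the lightweight cross-attention head would be trained, which is what keeps the approach gradient-free with respect to the backbone.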
Statistics
Training large-scale language models requires substantial computation resources.
Current research explores parameter-efficient adaptation to downstream tasks using black-box models and gradient-free prompt optimization.
CrossTune outperforms previous state-of-the-art methods by 5.7% on average across seven benchmark datasets.