Efficient and Diverse Paraphrase Generation Using Sequence-Level Knowledge Distillation
This research presents an approach to building smaller, more efficient models that generate high-quality, diverse paraphrases by distilling sequence-level knowledge from a large language model.
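The core of sequence-level knowledge distillation is to train the student directly on output sequences generated by the teacher, rather than on token-level probability distributions. The sketch below illustrates the data-construction step under stated assumptions: `teacher_paraphrase` is a hypothetical stand-in for sampling diverse paraphrases from a large language model, and the resulting corpus of (source, paraphrase) pairs would then be used to fine-tune a smaller student model.

```python
# A minimal sketch of corpus construction for sequence-level knowledge
# distillation. `teacher_paraphrase` is a hypothetical placeholder; a real
# pipeline would sample paraphrases from a large language model.

from typing import Callable, List, Tuple


def teacher_paraphrase(sentence: str, num_samples: int) -> List[str]:
    # Placeholder teacher: in practice, sample diverse outputs from an LLM
    # (e.g. with temperature or nucleus sampling) to encourage diversity.
    return [f"{sentence} (variant {i + 1})" for i in range(num_samples)]


def build_distillation_corpus(
    sources: List[str],
    teacher: Callable[[str, int], List[str]],
    samples_per_source: int = 3,
) -> List[Tuple[str, str]]:
    """Pair each source sentence with teacher-generated paraphrases.

    Sequence-level distillation trains the student on the teacher's full
    output sequences, so the student's training data is simply these pairs.
    """
    corpus = []
    for src in sources:
        for paraphrase in teacher(src, samples_per_source):
            corpus.append((src, paraphrase))
    return corpus


sources = ["The cat sat on the mat.", "It is raining heavily."]
corpus = build_distillation_corpus(sources, teacher_paraphrase)
print(len(corpus))  # 2 sources x 3 samples per source = 6 training pairs
```

The student is then fine-tuned on this corpus with a standard sequence-to-sequence objective, which is what allows a much smaller model to approximate the teacher's paraphrasing behavior.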