Evaluating Decoding Strategies for Generating Coherent and Relevant Text with Pre-Trained GPT-2 Model
This research comprehensively evaluates and compares prominent decoding strategies used for text generation with a pre-trained GPT-2 model. The study establishes a set of evaluation metrics to identify the most effective decoding technique, which can also serve as a tool for adversarial attacks on text classification models.
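The abstract does not enumerate the specific decoding methods under study. As a purely illustrative sketch (not the paper's implementation), the snippet below implements three strategies commonly compared in this setting — greedy decoding, top-k sampling, and nucleus (top-p) sampling — as selection rules over a single next-token logit vector; all function names are hypothetical.

```python
import math
import random

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy(logits):
    """Greedy decoding: always pick the highest-probability token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_sample(logits, k, rng=random):
    """Top-k sampling: renormalize over the k most likely tokens, then sample."""
    probs = softmax(logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    weights = [probs[i] for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

def nucleus_sample(logits, p, rng=random):
    """Nucleus (top-p) sampling: sample from the smallest set of tokens
    whose cumulative probability reaches p."""
    probs = softmax(logits)
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    weights = [probs[i] for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]
```

In a real pipeline these rules would be applied autoregressively to the logits produced by GPT-2 at each step; sampling-based strategies trade the determinism of greedy search for more diverse output.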