Leveraging contrastive examples, i.e., paired positive and negative instances, helps large language models generate responses that are better aligned with user preferences.
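As a purely illustrative sketch (not the paper's actual prompt format), one way to expose a model to paired positive and negative instances is to place both in the prompt and ask it to follow the preferred one; all names and wording below are hypothetical.

```python
# Illustrative sketch: a prompt that contrasts a preferred (positive) response
# with a dispreferred (negative) one before posing the new query.
def build_contrastive_prompt(query, positive, negative):
    return (
        "Below is a request with a good response and a bad response.\n\n"
        f"Request: {positive['query']}\n"
        f"Good response: {positive['response']}\n"
        f"Bad response: {negative['response']}\n\n"
        "Answer the new request in the style of the good response.\n"
        f"Request: {query}\nResponse:"
    )

prompt = build_contrastive_prompt(
    "Summarize the article in two sentences.",
    {"query": "Summarize the article in two sentences.",
     "response": "A concise, faithful two-sentence summary."},
    {"response": "A rambling summary that ignores the length limit."},
)
```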
Plug and Play with Prompts (PPP) is a novel method that uses prompt tuning to steer text generation by large language models in a data- and parameter-efficient manner.
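PPP's exact training setup is not detailed here; the following is a minimal prompt-tuning sketch using the Hugging Face peft library (an assumed toolchain, with the base model and hyperparameters chosen only for illustration), in which a small set of learned soft-prompt tokens steers a frozen language model.

```python
# Minimal prompt-tuning sketch: only the virtual prompt embeddings are
# trainable; the base language model stays frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

model_name = "gpt2"  # small base model, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # length of the learned soft prompt (assumption)
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the prompt embeddings are updated
```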
This research comprehensively evaluates and compares prominent decoding methods for text generation with a pre-trained GPT-2 model. The study establishes a set of metrics to identify the most effective decoding technique, which can also serve as a tool for adversarial attacks on text classification models.
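A small sketch of how such a comparison can be run with the Hugging Face transformers library: the same GPT-2 prompt is continued under greedy decoding, beam search, top-k, and nucleus sampling, and the outputs can then be scored under whatever metrics the study defines. The prompt and hyperparameter values are illustrative assumptions.

```python
# Generate continuations from GPT-2 under several common decoding strategies
# so they can be compared under a chosen set of metrics.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The movie was surprisingly"
inputs = tokenizer(prompt, return_tensors="pt")

strategies = {
    "greedy":  dict(do_sample=False),
    "beam":    dict(do_sample=False, num_beams=5),
    "top_k":   dict(do_sample=True, top_k=50),
    "nucleus": dict(do_sample=True, top_p=0.92),
}

for name, kwargs in strategies.items():
    output = model.generate(**inputs, max_new_tokens=30,
                            pad_token_id=tokenizer.eos_token_id, **kwargs)
    print(name, "->", tokenizer.decode(output[0], skip_special_tokens=True))
```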
LLMRefine, an inference-time optimization method, iteratively refines the output of large language models using a learned fine-grained feedback model to pinpoint defects and guide the refinement process.
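A hypothetical sketch of such an inference-time refinement loop is shown below; the `generate` and `critique` callables, the error format, and the stopping rule are placeholders standing in for the generator, the learned fine-grained feedback model, and the paper's actual interfaces.

```python
# Hypothetical refinement loop in the spirit of LLMRefine: a feedback model
# flags fine-grained defects, and the generator is re-prompted to fix them.
def refine(query, generate, critique, max_iters=3):
    draft = generate(query)
    for _ in range(max_iters):
        errors = critique(query, draft)   # e.g. list of (span, error_type) pairs
        if not errors:                    # stop once no defects are reported
            break
        feedback = "; ".join(f"{span}: {etype}" for span, etype in errors)
        draft = generate(f"{query}\n\nDraft: {draft}\nFix these issues: {feedback}")
    return draft
```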
This work decomposes grounded text generation into subtasks, focusing on content fusion in a multi-document setting.
TEncDM introduces a novel approach to text generation that trains diffusion models in the latent space of language model encodings; through an analysis of self-conditioning and decoder design, it demonstrates superior performance over existing models.