This paper proposes two optimization methods, ROPG-RL and ROPG-KD, that use feedback from the downstream language model to train retrieval models for personalizing large language models. It also introduces RSPG-Pre and RSPG-Post, retrieval-model selection approaches that choose, for each input, the retriever that best improves personalized text generation.
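A post-generation selection strategy like RSPG-Post can be sketched as follows: run each candidate retriever, generate with the downstream LM, score each output, and keep the best. This is a minimal illustrative sketch; every function name and the length-based scorer are assumptions, not the paper's actual method.

```python
# Hypothetical sketch of post-generation retrieval-model selection:
# function names and the scoring signal are assumptions for illustration.

def select_retriever_post(query, retrievers, generate, score):
    """Generate one output per retriever and keep the best-scoring one."""
    best_output, best_score = None, float("-inf")
    for retrieve in retrievers:
        docs = retrieve(query)          # personalized context from this retriever
        output = generate(query, docs)  # downstream LM generation
        s = score(output)               # feedback signal from the downstream LM
        if s > best_score:
            best_output, best_score = output, s
    return best_output

# Toy usage with stub retrievers, a stub generator, and a stub scorer.
retrievers = [lambda q: ["profile doc A"], lambda q: ["profile doc B"]]
generate = lambda q, docs: f"{q} | {docs[0]}"
score = lambda out: len(out)  # stand-in for an LM-based quality score
print(select_retriever_post("hello", retrievers, generate, score))
```

Selecting after generation (rather than before, as in RSPG-Pre) trades extra LM calls for a direct measurement of each retriever's effect on the final output.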
The RoleCraft framework aims to enhance the role-playing capabilities of large language models by incorporating detailed character profiles, emotional annotations, and contextually coherent dialogue generation.
This paper presents a novel memory-injection approach that combines parameter-efficient fine-tuning (PEFT) with Bayesian optimization to achieve personalized response generation from large language models.
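One common PEFT technique is a LoRA-style low-rank update, where the frozen base weight is augmented by a small trainable product of two matrices. The sketch below is an illustrative assumption about what a PEFT update looks like in general, not the paper's actual memory-injection architecture.

```python
# Minimal sketch of a LoRA-style parameter-efficient update: only the
# low-rank factors A and B would be trained; the base weight W stays frozen.
# This is an illustrative assumption, not the paper's exact method.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_effective_weight(W, A, B, alpha=1.0):
    """Effective weight W_eff = W + alpha * (A @ B)."""
    delta = matmul(A, B)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weight plus a rank-1 trainable update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [2.0]]   # 2x1 factor
B = [[0.5, 0.5]]     # 1x2 factor
print(lora_effective_weight(W, A, B))  # → [[1.5, 0.5], [1.0, 2.0]]
```

Because only the low-rank factors are updated, a per-user "memory" can be stored and swapped in cheaply, which is the general appeal of PEFT for personalization.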