Optimizing Retrieval Models to Personalize Large Language Models through Retrieval Augmentation
This paper proposes two optimization methods, ROPG-RL and ROPG-KD, that use feedback from the downstream large language model to train retrieval models for personalizing large language models. It also introduces two retrieval model selection approaches, RSPG-Pre and RSPG-Post, which choose the most suitable retrieval model for each input to further improve personalized text generation.
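To make the feedback-driven idea concrete, below is a minimal, hypothetical sketch of how a retriever could be trained from downstream generation quality via REINFORCE. All names here (`train_retriever`, the unigram-overlap reward, the echo LM stub) are illustrative assumptions, not the paper's ROPG-RL implementation: the retriever's scores define a sampling distribution over candidate documents, a sampled document conditions the (frozen) language model, and the quality of the resulting generation serves as the reward for a policy-gradient update.

```python
import math
import random

def softmax(scores):
    # Convert raw retriever scores into a sampling distribution.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def reward(generated, reference):
    # Toy reward: unigram overlap with the reference output
    # (a stand-in for a real downstream metric such as ROUGE).
    gen, ref = set(generated.split()), set(reference.split())
    return len(gen & ref) / max(len(ref), 1)

def train_retriever(docs, lm_generate, reference, steps=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    theta = [0.0] * len(docs)  # one learnable score per candidate document
    for _ in range(steps):
        probs = softmax(theta)
        i = rng.choices(range(len(docs)), weights=probs)[0]
        r = reward(lm_generate(docs[i]), reference)
        # REINFORCE: d/d theta_j log p(i) = [j == i] - p(j)
        for j in range(len(docs)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            theta[j] += lr * r * grad
    return theta

# Toy personalization scenario: three candidate profile documents,
# a stub LM that simply echoes the retrieved document.
docs = ["user likes jazz concerts", "weather report", "stock prices"]
lm_generate = lambda doc: doc
reference = "the user likes jazz"
theta = train_retriever(docs, lm_generate, reference)
best = max(range(len(docs)), key=lambda i: theta[i])
```

After training, `theta` assigns the highest score to the document whose retrieval led to the best downstream generation, which is the core intuition behind optimizing a retriever with language-model feedback.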