The content discusses Memory-injected LLM Personalization (MiLP), a novel approach to generating personalized responses from large language models (LLMs). The key highlights are:
Existing research has explored memory-augmented methods that prompt the LLM with pre-stored user-specific knowledge for personalized response generation. However, this paradigm is limited in its ability to capture fine-grained user information.
Rather than storing user knowledge in an external database, MiLP injects it directly into the LLM's parameters using parameter-efficient fine-tuning (PEFT) techniques. This allows the model to better understand and leverage the injected user-specific information.
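As a rough illustration of the injection idea, here is a minimal sketch using the Hugging Face peft library. The base model (gpt2), the target modules, and all hyperparameters are placeholder assumptions for demonstration, not the paper's actual configuration.

```python
# Minimal, illustrative sketch (not the authors' implementation): fine-tune
# LoRA adapters on a user's history so that user-specific "memory" lives in
# the model's parameters instead of an external database.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder base LLM, not necessarily the paper's choice
model = AutoModelForCausalLM.from_pretrained(base)

# PEFT module whose low-rank weights will hold the injected user memory.
config = LoraConfig(
    r=8,                        # adapter rank (the "size" of the PEFT module)
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2 attention projection: where to inject
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

# Training this wrapped model on the user's historical texts (with the base
# weights frozen) writes user-specific knowledge into the adapter parameters.
```

The key design point is that the base weights stay frozen; only the small adapter matrices are updated, so per-user memory can be stored and swapped in cheaply.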
MiLP also introduces a comprehensive search space and a Bayesian Optimization-based approach to identify the optimal configuration for personalized response generation, considering factors such as the number of PEFT modules, their size, and the layers into which they are injected.
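The sketch below shows what such a configuration search could look like, using Optuna's TPE sampler as a stand-in for the paper's Bayesian Optimization. The search variables mirror the factors listed above; evaluate_personalization is a hypothetical placeholder objective, not part of MiLP.

```python
# Hypothetical sketch of the configuration search. TPE is used here as an
# accessible stand-in for Bayesian Optimization; the variable names and the
# toy objective are assumptions made for illustration only.
import optuna

def evaluate_personalization(config):
    # Placeholder: in practice this would train PEFT modules under `config`
    # and return a validation metric (e.g., ROUGE on held-out user responses).
    return (-abs(config["rank"] - 8)
            - abs(config["num_modules"] - 2)
            - abs(len(config["layers"]) - 6))

def objective(trial):
    rank = trial.suggest_categorical("module_size", [4, 8, 16, 32])
    num_modules = trial.suggest_int("num_peft_modules", 1, 4)
    top_layer = trial.suggest_int("inject_up_to_layer", 1, 12)
    config = {"rank": rank, "num_modules": num_modules,
              "layers": list(range(top_layer))}
    return evaluate_personalization(config)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best configuration:", study.best_params)
```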
Extensive experiments on three public datasets show that MiLP significantly outperforms existing memory-augmented and memory-based personalization approaches across a range of metrics, validating the effectiveness of the proposed method.
The authors also conduct ablation studies to analyze the impact of different components in the search space, highlighting the necessity of the comprehensive search approach.
The authors acknowledge the high computational requirements of MiLP and the potential impact of user content sparsity on the quality of generated responses, which are noted as limitations to be addressed in future work.
Key insights distilled from: Kai Zhang, Li... et al., arxiv.org, 04-05-2024, https://arxiv.org/pdf/2404.03565.pdf