Severin, N., Ziablitsev, A., Savelyeva, Y., Tashchilin, V., Bulychev, I., Yushkov, M., ... & Makarov, I. (2024). LLM-KT: A Versatile Framework for Knowledge Transfer from Large Language Models to Collaborative Filtering. arXiv preprint arXiv:2411.00556.
This paper introduces LLM-KT, a framework designed to enhance the performance of collaborative filtering (CF) models by transferring knowledge from large language models (LLMs). The authors aim to address the limitations of existing LLM-based recommendation methods that are often restricted to context-aware models.
LLM-KT operates by generating user preference profiles using LLMs, embedding these profiles into a dense vector representation, and then training the CF model to reconstruct these embeddings within a specific internal layer. This process allows the CF model to learn from the LLM-generated knowledge without altering its architecture. The framework is evaluated on two benchmark datasets, MovieLens and Amazon CDs and Vinyl, using various CF models, including NeuMF, SimpleX, and MultVAE. The performance is measured using ranking metrics like NDCG@K, Hits@K, and Recall@K for general CF models and AUC-ROC for context-aware models.
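The embedding-reconstruction idea described above can be sketched as an auxiliary loss attached to one internal layer of the CF model. The class and parameter names below are hypothetical illustrations, not the paper's actual implementation; the paper's exact loss and layer choice may differ.

```python
# Illustrative sketch (assumed names): an auxiliary head that trains a CF
# model's internal representation to reconstruct LLM-derived profile embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProfileReconstructionHead(nn.Module):
    """Projects a chosen internal CF layer into the LLM profile-embedding
    space and scores how well it reconstructs the profile embedding."""

    def __init__(self, hidden_dim: int, profile_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, profile_dim)

    def forward(self, internal_repr: torch.Tensor,
                profile_emb: torch.Tensor) -> torch.Tensor:
        # Cosine-based reconstruction loss: pulls the internal layer toward
        # the dense LLM profile embedding without changing the CF architecture.
        recon = self.proj(internal_repr)
        return 1.0 - F.cosine_similarity(recon, profile_emb, dim=-1).mean()


# Training would combine this with the usual CF objective, e.g.
#   total_loss = cf_ranking_loss + alpha * recon_loss
# where alpha is an assumed trade-off weight, not a value from the paper.
head = ProfileReconstructionHead(hidden_dim=64, profile_dim=32)
recon_loss = head(torch.randn(8, 64), torch.randn(8, 32))
```

Because the auxiliary head sits beside the CF model rather than inside it, the same pattern applies to NeuMF, SimpleX, MultVAE, or any model that exposes an intermediate representation.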
The experiments demonstrate that LLM-KT consistently improves the performance of all tested CF models across different scenarios. Notably, LLM-KT achieves comparable results to state-of-the-art methods like KAR in context-aware settings while being applicable to a broader range of CF models that do not inherently support input features.
LLM-KT offers a versatile and effective approach for integrating LLM-derived knowledge into CF models, enhancing their accuracy and applicability. The framework's flexibility and ease of integration make it a valuable tool for researchers and practitioners seeking to leverage LLMs for improved recommendation systems.
This research contributes to the growing field of LLM-enhanced recommendation systems by proposing a novel framework that overcomes limitations of existing methods. LLM-KT's model-agnostic approach expands the potential of LLMs in recommendation tasks, paving the way for more sophisticated and personalized recommendation systems.
While LLM-KT shows promising results, future research could explore alternative architectures and loss functions for knowledge transfer. Additionally, investigating the framework's effectiveness in other recommendation domains, such as sequential recommendations, would be beneficial.
Source: by Nikita Sever... at arxiv.org, 2024-11-04
https://arxiv.org/pdf/2411.00556.pdf