Core Concepts
Leveraging Large Language Models (LLMs) to generate informative item profiles and aligning them with Graph Convolutional Network (GCN)-based collaborative filtering representations to improve recommendation performance.
Abstract
The paper introduces a Prompting-Based Representation Learning Method for Recommendation (P4R) that aims to enhance recommendation performance by utilizing LLMs. The key aspects are:
- Auxiliary Feature Extraction through In-context Learning:
  - Proposes a recommendation-oriented prompting format to generate informative item profiles using LLMs (see the prompt-and-encoding sketch after this list).
  - Categorizes textual information into intrinsic (item-specific) and extrinsic (user-feedback) attributes to guide the LLM's reasoning.
- Textual Embedding and Representation:
  - Employs a pre-trained BERT model to extract semantic representations of the generated item profiles.
  - Aligns the LLM-enhanced item embeddings with Graph Convolutional Network (GCN)-based collaborative filtering representations.
- Alignment with Recommendation through a GNN-based Approach:
  - Incorporates the LLM-enhanced item embeddings into a GCN-based collaborative filtering framework (a minimal sketch follows the summary paragraph below).
  - Optimizes the model using the Bayesian Personalized Ranking (BPR) loss function.
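The first two steps can be made concrete with a short sketch. The prompt template, the `build_item_prompt` helper, and the attribute fields below are illustrative assumptions rather than the paper's exact prompting format; the encoding step uses the Hugging Face `transformers` API and takes BERT's [CLS] token as the profile representation, which is one common pooling choice and not a confirmed detail of P4R.

```python
# Minimal sketch (assumed names and prompt wording, not the authors' exact pipeline).
from transformers import AutoTokenizer, AutoModel
import torch

def build_item_prompt(intrinsic: dict, extrinsic: dict) -> str:
    """Assemble a recommendation-oriented prompt from intrinsic (item-specific)
    and extrinsic (user-feedback) attributes, asking the LLM for an item profile."""
    intrinsic_txt = "; ".join(f"{k}: {v}" for k, v in intrinsic.items())
    extrinsic_txt = "; ".join(f"{k}: {v}" for k, v in extrinsic.items())
    return (
        "You are a recommender-system assistant.\n"
        f"Intrinsic attributes: {intrinsic_txt}\n"
        f"Extrinsic attributes (user feedback): {extrinsic_txt}\n"
        "Write a concise, informative profile of this item that would help "
        "predict which users will like it."
    )

# profile_text stands in for the LLM's response to the prompt above.
profile_text = "A fast-paced co-op video game praised for its level design ..."

# Encode the generated profile with a pre-trained BERT model; the [CLS] token
# embedding is taken here as the item's semantic representation.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

with torch.no_grad():
    inputs = tokenizer(profile_text, truncation=True, return_tensors="pt")
    cls_embedding = bert(**inputs).last_hidden_state[:, 0, :]  # shape (1, 768)
```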
The authors evaluate the proposed P4R framework on the Yelp and Amazon-VideoGames datasets, and demonstrate its superior performance compared to state-of-the-art recommendation models. They also conduct ablation studies to analyze the impact of different design choices, such as the embedding size and the inclusion of LLM-enhanced item profiles.
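To ground the GCN-based alignment and the BPR objective described above, here is a minimal LightGCN-style sketch under stated assumptions: the class name `P4RSketch`, the linear projection `proj` mapping BERT's 768-dimensional profile embeddings into the collaborative-filtering space, and the additive fusion of ID and profile embeddings are illustrative choices, not the authors' exact architecture; `norm_adj` is assumed to be a pre-built, symmetrically normalized sparse user-item adjacency matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4RSketch(nn.Module):
    """Illustrative model: ID embeddings fused with projected BERT profile embeddings,
    propagated over a normalized user-item graph, trained with the BPR loss."""

    def __init__(self, n_users, n_items, bert_dim=768, emb_dim=64, n_layers=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.proj = nn.Linear(bert_dim, emb_dim)  # align BERT space with CF space
        self.n_layers = n_layers

    def propagate(self, norm_adj, item_profile_emb):
        # Fuse ID embeddings with the LLM-enhanced (BERT) item embeddings.
        items = self.item_emb.weight + self.proj(item_profile_emb)
        x = torch.cat([self.user_emb.weight, items], dim=0)
        layers = [x]
        for _ in range(self.n_layers):          # simple neighborhood aggregation
            x = torch.sparse.mm(norm_adj, x)
            layers.append(x)
        x = torch.stack(layers, dim=0).mean(0)  # layer-wise mean pooling
        return torch.split(x, [self.user_emb.num_embeddings,
                               self.item_emb.num_embeddings])

    def bpr_loss(self, users, pos_items, neg_items, norm_adj, item_profile_emb):
        u, i = self.propagate(norm_adj, item_profile_emb)
        pos = (u[users] * i[pos_items]).sum(-1)
        neg = (u[users] * i[neg_items]).sum(-1)
        return -F.logsigmoid(pos - neg).mean()  # Bayesian Personalized Ranking
```

During training, `users`, `pos_items`, and `neg_items` would be index tensors from sampled (user, positive item, negative item) triples, the standard setup for the BPR objective.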
Statistics
The Yelp dataset has 767 users, 3,647 items, and 27,453 interactions with a sparsity of 99.018571%.
The Amazon-VideoGames dataset has 795 users, 6,627 items, and 37,341 interactions with a sparsity of 99.291235%.
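The reported sparsity values follow directly from these counts via sparsity = 1 − interactions / (users × items); the short check below reproduces them up to rounding.

```python
# Verify the reported sparsity values from the user/item/interaction counts.
def sparsity(users, items, interactions):
    return 1 - interactions / (users * items)

print(f"Yelp:              {sparsity(767, 3647, 27453):.6%}")   # reported: 99.018571%
print(f"Amazon-VideoGames: {sparsity(795, 6627, 37341):.6%}")   # reported: 99.291235%
```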
Quotes
"Believing that a better understanding of the user or item itself can be the key factor in improving recommendation performance, we conduct research on generating informative profiles using state-of-the-art LLMs."
"The key advantage of incorporating PLMs into recommendation systems lies in their ability to extract high-quality representations of textual features and leverage the extensive external knowledge encoded within them."