Core Concepts
A novel framework that harnesses Large Language Models for text-rich sequential recommendation.
Abstract
This paper examines the challenges that Large Language Models (LLMs) face in text-rich recommendation scenarios and proposes a novel framework, LLM-TRSR, to address them. The framework segments the user behavior sequence, employs an LLM-based summarizer to extract user preferences, and fine-tunes an LLM-based recommender using Supervised Fine-Tuning (SFT). Experimental results on two datasets demonstrate the effectiveness of the approach.
Abstract:
- Recent advances in Large Language Models (LLMs) have impacted Recommender Systems.
- Text-rich recommendation scenarios pose challenges such as input-length limits and high computational overhead.
- Proposed framework, LLM-TRSR, utilizes LLM-based summarizer and recommender for effective recommendations.
Introduction:
- LLMs like ChatGPT have demonstrated strong capabilities in Natural Language Processing.
- Applying LLMs to Recommender Systems typically involves feeding user profiles and behavioral data to the model.
Problem Formulation:
- Given a user u with a historical behavior sequence S, the goal is to estimate the click probability of a candidate item I_c.
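The formulation above can be sketched as a scoring function. This is a toy stand-in, not the paper's implementation: the word-overlap score merely illustrates the interface of mapping a textual preference summary and a candidate item to a click probability in (0, 1); the function and variable names are assumptions.

```python
import math

def click_probability(preference_summary: str, candidate_item: str) -> float:
    """Toy stand-in for the LLM-based recommender: scores a candidate item
    against a user's textual preference summary via word overlap, squashed
    into (0, 1) with a sigmoid. Illustrative only."""
    pref_words = set(preference_summary.lower().split())
    item_words = set(candidate_item.lower().split())
    overlap = len(pref_words & item_words)
    # Sigmoid centered so that one shared word gives probability 0.5.
    return 1.0 / (1.0 + math.exp(-(overlap - 1)))

# A candidate matching the summarized preferences scores above 0.5:
p = click_probability("enjoys sci-fi novels and space opera",
                      "classic sci-fi space adventure novel")
```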
Technical Details:
- Framework involves segmenting user behavior sequences and employing LLM-based summarizer for preference extraction.
- Two paradigms for summarization: hierarchical and recurrent.
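The control flow of the two paradigms can be sketched as follows. The `summarize` helper is a placeholder for the LLM-based summarizer (here it just joins and truncates text), and all names are illustrative assumptions rather than the paper's code:

```python
def summarize(texts):
    """Placeholder for the LLM-based summarizer: joins and truncates the
    inputs so the control flow of each paradigm stays visible."""
    return " | ".join(texts)[:500]

def recurrent_summary(blocks):
    """Recurrent paradigm: fold each successive block of user behavior
    into the running preference summary, one block at a time."""
    summary = summarize(blocks[0])
    for block in blocks[1:]:
        summary = summarize([summary] + block)
    return summary

def hierarchical_summary(blocks):
    """Hierarchical paradigm: summarize each block independently, then
    merge the per-block summaries into one overall preference summary."""
    partials = [summarize(block) for block in blocks]
    return summarize(partials)
```

Either way, each `summarize` call sees only one segment (plus a short summary), which is how the framework works around the input-length limits of the underlying LLM.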
Experiments:
- Conducted experiments on Amazon-M2 and MIND datasets.
- Results show that LLM-TRSR outperforms baseline methods on the Recall@K and MRR@K metrics.
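For reference, the two evaluation metrics named above are standard and can be computed as follows (a minimal sketch; the function names are my own):

```python
def recall_at_k(ranked_items, relevant, k):
    """Fraction of relevant items that appear in the top-k of the ranking."""
    hits = len(set(ranked_items[:k]) & set(relevant))
    return hits / len(relevant)

def mrr_at_k(ranked_items, relevant, k):
    """Reciprocal rank of the first relevant item within the top-k,
    or 0.0 if no relevant item appears that high."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0
```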
Stats
Recent advances in Large Language Models (LLMs) have been changing the paradigm of Recommender Systems (RS).
Existing LLMs typically impose limitations on the length of the input, e.g., 1,024 tokens for GPT-2 [27].
We conduct experiments on two public datasets to demonstrate the effectiveness of our approach.
Quotes
"To this end, in this paper, we design a novel framework for harnessing Large Language Models for Text-Rich Sequential Recommendation (LLM-TRSR)."
"We also use Low-Rank Adaptation (LoRA) for Parameter-Efficient Fine-Tuning (PEFT)."