Distilling Knowledge from Large Language Models to Empower Lightweight Sequential Recommenders
This work uses knowledge distillation to enable lightweight conventional sequential recommendation models to match, and in some cases surpass, the performance of complex large language model-based recommenders while keeping inference latency low.
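As a rough illustration of the idea (the paper's exact student architecture and objective are not specified here), a standard soft-label distillation setup might look like the sketch below: the LLM teacher's item-ranking scores are precomputed offline, and a lightweight student (a toy GRU-based sequential recommender, used as a hypothetical stand-in) is trained on a weighted sum of a cross-entropy ranking loss and a temperature-scaled KL term.

```python
# Hedged sketch: soft-label knowledge distillation for a lightweight
# sequential recommender. The GRU student, loss weights, and cached
# teacher logits are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUStudent(nn.Module):
    """Lightweight student: item embeddings + GRU + next-item scoring head."""
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_items)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(self.emb(seq))      # (B, L, dim)
        return self.head(h[:, -1, :])       # next-item logits, shape (B, num_items)

def distill_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """alpha * temperature-scaled KL(teacher || student) + (1 - alpha) * CE."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                              # rescale gradients by T^2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random data; teacher_logits stands in for scores
# cached from the (slow) LLM-based recommender during offline training.
num_items, B, L = 1000, 8, 20
student = GRUStudent(num_items)
seqs = torch.randint(1, num_items, (B, L))
teacher_logits = torch.randn(B, num_items)
targets = torch.randint(0, num_items, (B,))
loss = distill_loss(student(seqs), teacher_logits, targets)
loss.backward()
```

The design point this setup captures is that the LLM is only queried offline to produce supervision signals; at serving time only the small student runs, which is what keeps inference latency low.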