Core Concepts
Selecting appropriate in-context demonstrations is crucial for enhancing the performance of large language models (LLMs) in passage ranking tasks. The proposed DemoRank framework addresses this challenge by introducing a demonstration retriever and a dependency-aware demonstration reranker to iteratively select the most suitable demonstrations for few-shot in-context learning.
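The two-stage flow can be summarized in pseudocode. The sketch below is illustrative only: `retriever.retrieve`, `reranker.score`, and all parameter names are assumptions, not the paper's actual API.

```python
# Illustrative sketch of DemoRank's "retrieve-then-rerank" selection.
# `retriever.retrieve` and `reranker.score` are assumed interfaces,
# not the paper's actual API.

def select_demonstrations(query, demo_pool, retriever, reranker,
                          top_k=50, n_shots=3):
    # Stage 1: DRetriever narrows the pool to a candidate set.
    candidates = list(retriever.retrieve(query, demo_pool, top_k=top_k))

    # Stage 2: DReranker picks demonstrations one at a time, scoring each
    # candidate together with those already selected (dependency-aware).
    selected = []
    for _ in range(n_shots):
        scores = [reranker.score(query, selected + [c]) for c in candidates]
        best = candidates.pop(scores.index(max(scores)))
        selected.append(best)
    return selected
```

Scoring each candidate jointly with the already-selected prefix is what makes the reranking step dependency-aware, in contrast to retrievers that score demonstrations independently.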
Summary
The paper introduces the DemoRank framework for improving passage ranking using large language models (LLMs) through effective demonstration selection.
Key highlights:
- Passage ranking using LLMs has gained significant interest, but few studies have explored how to select appropriate in-context demonstrations for this task.
- Existing methods use the LLM's feedback to train a retriever for demonstration selection, but they ignore dependencies between demonstrations, leading to suboptimal performance.
- DemoRank proposes a two-stage "retrieve-then-rerank" approach. It first trains a demonstration retriever (DRetriever) using the LLM's feedback on individual demonstrations. Then, it introduces a dependency-aware demonstration reranker (DReranker) to iteratively select the most suitable demonstrations for few-shot in-context learning.
- To address the challenges of training the DReranker, the authors propose an efficient method for constructing dependency-aware training samples (see the sketch after this list) and a list-pairwise training approach.
- Extensive experiments on diverse ranking datasets demonstrate the effectiveness of DemoRank, especially in few-shot scenarios. Further analysis shows its strong performance under different settings, including limited training data, varying demonstration numbers, unseen datasets, and different LLM rankers.
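A rough sketch of how dependency-aware training samples might be constructed, assuming a hypothetical `llm_feedback(query, demos)` that returns the LLM ranker's feedback score when prompted with the demonstration list `demos`; the paper's exact procedure and scoring signal may differ.

```python
# Hypothetical construction of dependency-aware training samples.
# `llm_feedback` is an assumed scoring function, not the paper's API.

def build_training_samples(query, candidates, llm_feedback, max_len=3):
    samples, prefix = [], []
    candidates = list(candidates)
    for _ in range(max_len):
        # Score each candidate as the *next* demonstration after the current
        # prefix, so labels capture inter-demonstration dependencies.
        scored = sorted(
            ((llm_feedback(query, prefix + [c]), c) for c in candidates),
            key=lambda pair: pair[0],
            reverse=True,
        )
        # One training sample: a prefix plus its ranked candidate extensions.
        samples.append((query, list(prefix), scored))
        # Greedily extend the prefix with the best-scoring candidate.
        _, best = scored[0]
        prefix.append(best)
        candidates.remove(best)
    return samples
```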
Statistics
"Tea helps with weight loss ..."
"Tea is popular worldwide ..."
"Tea originated in China ..."
"Weight loss and digestion ..."
Quotes
"Selecting appropriate in-context demonstrations is crucial for enhancing the performance of large language models (LLMs) in passage ranking tasks."
"To overcome these challenges, we propose an efficient approach to construct a kind of dependency-aware training samples."
"Based on these training samples, we further design a novel list-pairwise training approach which compares a pair of lists that only differ in the last demonstration, to teach the reranker how to select the next demonstration given a previous sequence."