Core Concepts
Leveraging instruction-following LLMs for unsupervised passage reranking.
Abstract:
Introduces INSTUPR, an unsupervised passage reranking method based on large language models (LLMs).
Utilizes instruction-following capabilities of LLMs for passage reranking without fine-tuning.
Employs soft relevance score aggregation and pairwise reranking to improve effectiveness.
Introduction:
Deep learning methods like DPR have shown superior performance in information retrieval.
Passage reranking is crucial to enhance retrieval accuracy by ranking retrieved passages based on relevance to the query.
Related Work:
Dense passage retriever (DPR) framework encodes documents and queries into dense representations.
Previous work explored LLMs for passage reranking through fine-tuning or unsupervised methods.
Our Method:
INSTUPR leverages instruction-following LLMs for unsupervised passage reranking.
Soft relevance score aggregation technique enhances reranking performance.
Pairwise reranking scheme outperforms pointwise reranking, at a higher computational cost.
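The pairwise scheme above can be sketched as a win-counting tournament: the LLM is asked, for each pair of candidate passages, which one is more relevant, and passages are sorted by wins. This is a minimal illustration, not the paper's implementation; `llm_prefers` is a hypothetical stand-in for the actual LLM comparison call.

```python
from itertools import permutations

def pairwise_rerank(query, passages, llm_prefers):
    """Rank passages by counting pairwise wins from an LLM comparator.

    `llm_prefers(query, a, b)` is assumed to return True if the model
    judges passage `a` more relevant to `query` than passage `b`.
    """
    wins = {p: 0 for p in passages}
    # Compare every ordered pair, so each pair is seen in both orders
    # (a simple way to average out the model's position bias).
    for a, b in permutations(passages, 2):
        if llm_prefers(query, a, b):
            wins[a] += 1
    return sorted(passages, key=lambda p: wins[p], reverse=True)

# Toy comparator standing in for the LLM: more word overlap wins.
def mock_prefers(query, a, b):
    overlap = lambda p: len(set(query.split()) & set(p.split()))
    return overlap(a) > overlap(b)

ranked = pairwise_rerank(
    "what is dense retrieval",
    ["dense retrieval encodes text", "sparse methods", "dense retrieval"],
    mock_prefers,
)
```

Note the quadratic number of comparisons, which is why pairwise reranking is more expensive than pointwise scoring.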
Experiments:
Conducted on TREC DL19, DL20, and BEIR benchmarks using BM25 as the base retrieval method.
Results show INSTUPR outperforms UPR and achieves comparable performance to state-of-the-art methods.
Conclusion:
Proposes an instruction-based unsupervised passage reranking method leveraging LLMs effectively.
Soft score aggregation and pairwise reranking contribute to improved performance.
Stats
"Experimental results demonstrate that INSTUPR outperforms unsupervised baselines as well as an instruction-tuned reranker."
"We instruct the LLMs to predict a relevance score from 1 to 5 using the Likert scale."
"Our proposed soft aggregation method significantly contributes to these improvements."
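The Likert-scale soft scoring quoted above can be sketched as an expected value: instead of taking the single score token the model generates, weight each score 1–5 by the model's probability of generating that token. This is a simplified illustration, assuming access to the model's token probabilities; the distribution below is mock data, not real model output.

```python
def soft_aggregate(token_probs: dict) -> float:
    """Expected relevance score under the model's distribution over
    the Likert score tokens "1".."5" (renormalized over those tokens)."""
    total = sum(token_probs.values())
    return sum(int(tok) * p / total for tok, p in token_probs.items())

# Mock probability distribution over score tokens for one
# (query, passage) pair, standing in for real LLM output.
probs = {"1": 0.05, "2": 0.10, "3": 0.20, "4": 0.40, "5": 0.25}
score = soft_aggregate(probs)  # expected score = 3.7
```

Passages are then reranked by this continuous score, which is finer-grained than the five discrete Likert levels alone.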