The authors study the task of retrieving a set of documents that covers various perspectives on a complex and contentious question (e.g., "Will ChatGPT do more harm than good?"). They curate a Benchmark for Retrieval Diversity for Subjective questions (BERDS), where each example consists of a question and diverse perspectives associated with the question, sourced from survey questions and debate websites.
The authors evaluate the performance of different retrievers (BM25, DPR, CONTRIEVER) paired with various corpora (Wikipedia, a web snapshot, and a corpus constructed on the fly from search-engine results) on the BERDS dataset. They find that existing retrievers struggle to surface documents covering all perspectives, even when retrieving from the web.
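The evaluation hinges on measuring what fraction of a question's reference perspectives appear somewhere in the retrieved documents. A minimal sketch of that idea follows; note that the paper judges coverage with an automatic (LLM-based) evaluator, whereas the `covers` function here is only a toy keyword stand-in, and all names are illustrative assumptions rather than the authors' code.

```python
def covers(doc, perspective):
    # Toy stand-in for the paper's automatic evaluator (the paper uses an
    # LLM-based judge); here, simple case-insensitive substring matching.
    return perspective.lower() in doc.lower()

def perspective_coverage(retrieved_docs, perspectives):
    # Fraction of reference perspectives matched by at least one
    # retrieved document.
    hit = sum(1 for p in perspectives
              if any(covers(d, p) for d in retrieved_docs))
    return hit / len(perspectives)

# Example: two retrieved documents against three reference perspectives.
docs = ["ChatGPT boosts productivity for many workers",
        "Critics argue ChatGPT spreads misinformation"]
refs = ["boosts productivity", "spreads misinformation", "replaces jobs"]
score = perspective_coverage(docs, refs)  # 2 of 3 perspectives covered
```

A set-level metric like this rewards diversity directly: retrieving ten documents that all argue the same side scores no higher than retrieving one.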
To enhance the diversity of the retrieval results, the authors implement simple re-ranking and query expansion approaches. The query expansion approach, which first prompts a large language model to generate multiple perspectives and then uses them to guide retrieval, yields strong gains for the dense base retriever (CONTRIEVER).
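The query-expansion pipeline can be sketched as: generate contrasting perspectives, retrieve a list per perspective, then merge the lists round-robin so each perspective is represented near the top. The sketch below stubs the LLM with a fixed template and uses a toy word-overlap retriever; every function name, the merge strategy, and the corpus are assumptions for illustration, not the authors' implementation.

```python
def generate_perspectives(question):
    # Stand-in for an LLM call that proposes contrasting perspectives.
    return [question + " arguments in favor",
            question + " arguments against"]

def retrieve(query, corpus, k=2):
    # Toy lexical retriever: rank documents by word overlap with the query.
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: -len(q_terms & set(d.lower().split())))
    return ranked[:k]

def diverse_retrieve(question, corpus, k=3):
    # Retrieve per perspective, then interleave the result lists
    # round-robin (deduplicating) so each perspective surfaces early.
    lists = [retrieve(p, corpus) for p in generate_perspectives(question)]
    merged, seen = [], set()
    for rank in range(max(len(lst) for lst in lists)):
        for lst in lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                merged.append(lst[rank])
    return merged[:k]
```

For example, on a three-document toy corpus where one document argues each side and a third is off-topic, `diverse_retrieve` surfaces both sides while a single-query retriever may rank only one side highly.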
The authors further provide rich analysis, studying the coverage of each corpus, retriever sycophancy, and whether retrievers prefer supporting or opposing perspectives to the input query.
Key insights extracted from the paper by Hung-Ting Ch... at arxiv.org, 09-27-2024: https://arxiv.org/pdf/2409.18110.pdf