The University of Glasgow Terrier team participated in the TREC 2023 Deep Learning track to explore new generative approaches to retrieval and to validate existing ones. They investigated generative query reformulation (Gen-QR) and generative pseudo-relevance feedback (Gen-PRF) using the FLAN-T5 language model, and conducted a deeper evaluation of adaptive re-ranking (GAR) on the MS MARCO-v2 corpus.
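To give a rough sense of what generative query reformulation looks like in practice, the sketch below prompts FLAN-T5 to produce expansion terms for a query, optionally conditioning on pseudo-relevant passages for the Gen-PRF variant. The prompt wording, checkpoint size, feedback depth, and generation settings are illustrative assumptions, not the team's exact configuration.

```python
# A minimal sketch of Gen-QR / Gen-PRF with FLAN-T5 (prompts and settings are
# illustrative assumptions, not the exact setup used in the paper).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-large"  # assumed checkpoint size
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_expansion(query: str, feedback_passages: list[str] | None = None) -> str:
    """Generate expansion terms for a query.

    Gen-QR: the prompt contains only the query.
    Gen-PRF: the prompt additionally contains top-ranked (pseudo-relevant) passages.
    """
    if feedback_passages:
        context = " ".join(feedback_passages[:3])  # assumed feedback depth of 3
        prompt = f"Improve the search query '{query}' using the following context: {context}"
    else:
        prompt = f"Write a list of useful search terms for the query: {query}"

    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# The expanded query is typically the original query concatenated with the
# generated terms before being passed to the retrieval pipeline.
expanded = "what is adaptive re-ranking " + generate_expansion("what is adaptive re-ranking")
```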
The team found that generative relevance feedback can transfer to a monoELECTRA cross-encoder and is further bolstered by adaptive re-ranking. They also observed that while generative relevance feedback is generally effective, the approach is sensitive to the form of the query, with performance degrading on conversational-style queries due to artifacts of instruction tuning.
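To make the adaptive re-ranking step concrete, here is a simplified sketch of a GAR-style loop: it alternates between scoring a batch drawn from the initial ranking and a batch of corpus-graph neighbours of the best documents scored so far. The `score` callable and `corpus_graph` mapping are placeholders standing in for a cross-encoder such as monoELECTRA and a pre-computed nearest-neighbour corpus graph; the batching and budget details are assumptions, not the paper's exact implementation.

```python
# A simplified sketch of graph-based adaptive re-ranking (GAR).
import heapq
from typing import Callable

def adaptive_rerank(
    query: str,
    initial_ranking: list[str],          # doc ids from the first stage (e.g. BM25)
    corpus_graph: dict[str, list[str]],  # doc id -> neighbouring doc ids
    score: Callable[[str, str], float],  # (query, doc id) -> relevance score
    budget: int = 100,                   # max number of scoring calls
    batch_size: int = 16,
) -> list[tuple[float, str]]:
    scored: dict[str, float] = {}
    frontier: list[str] = []             # neighbours of well-scoring docs, not yet scored
    pool = iter(initial_ranking)
    use_frontier = False                 # alternate between initial ranking and frontier

    while len(scored) < budget:
        # Pick the next batch, alternating between the two sources.
        if use_frontier and frontier:
            batch = [d for d in frontier[:batch_size] if d not in scored]
            frontier = frontier[batch_size:]
        else:
            batch = []
            for doc in pool:
                if doc not in scored:
                    batch.append(doc)
                if len(batch) == batch_size:
                    break
            if not batch and frontier:
                # Initial ranking exhausted; fall back to the frontier.
                batch = [d for d in frontier[:batch_size] if d not in scored]
                frontier = frontier[batch_size:]
        if not batch:
            break
        use_frontier = not use_frontier

        # Score the batch, then expand the frontier with neighbours of the best docs so far.
        for doc in batch:
            scored[doc] = score(query, doc)
        best = heapq.nlargest(batch_size, scored.items(), key=lambda kv: kv[1])
        for doc, _ in best:
            frontier.extend(d for d in corpus_graph.get(doc, []) if d not in scored)

    return sorted(((s, d) for d, s in scored.items()), reverse=True)
```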
The team also found that, with a sufficient compute budget and corpus graph size, a first-stage lexical model such as BM25 can closely match the effectiveness of a learned sparse retrieval model such as SPLADE, with the two rankings becoming increasingly correlated as the budget grows. This suggests that when labeled data is unavailable or costly to collect, adaptive re-ranking can provide a compelling alternative to complex first-stage retrieval models.
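One way to probe the claim that BM25-initiated and SPLADE-initiated adaptive re-ranking converge is to measure how correlated the two final rankings are at increasing budgets. The sketch below does this with Kendall's tau over the documents the two rankings share; the run format (query id mapped to an ordered list of doc ids) and the cutoff are assumptions for illustration.

```python
# A hedged sketch of measuring agreement between two runs (e.g. BM25+GAR vs
# SPLADE+GAR) as the re-ranking budget grows. Runs are assumed to be
# {query_id: [doc_id, ...]} in rank order.
from scipy.stats import kendalltau

def ranking_agreement(run_a: dict[str, list[str]], run_b: dict[str, list[str]], k: int = 100) -> float:
    """Mean Kendall's tau over queries, computed on documents present in both top-k lists."""
    taus = []
    for qid in run_a.keys() & run_b.keys():
        top_a, top_b = run_a[qid][:k], run_b[qid][:k]
        shared = [d for d in top_a if d in set(top_b)]
        if len(shared) < 2:
            continue
        ranks_a = [top_a.index(d) for d in shared]
        ranks_b = [top_b.index(d) for d in shared]
        tau, _ = kendalltau(ranks_a, ranks_b)
        taus.append(tau)
    return sum(taus) / len(taus) if taus else 0.0

# Evaluated at several budgets, the tau between the BM25- and SPLADE-initiated
# rankings would be expected to rise as the budget increases.
```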
The team's most effective run combined generative pseudo-relevance feedback and adaptive re-ranking, outperforming other approaches in terms of P@10 and nDCG@10. This highlights the potential of these techniques to improve passage retrieval performance.
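For reference, scoring a TREC-format run on the two reported cutoffs can be done with the ir_measures package, as sketched below; the qrels and run file names are placeholders, not the team's actual files.

```python
# A small sketch of evaluating a TREC-format run on P@10 and nDCG@10 with ir_measures.
import ir_measures
from ir_measures import P, nDCG

qrels = ir_measures.read_trec_qrels("dl23-passage.qrels")  # hypothetical path
run = ir_measures.read_trec_run("uogtr_genprf_gar.run")    # hypothetical path

results = ir_measures.calc_aggregate([P@10, nDCG@10], qrels, run)
print(results)  # e.g. {P@10: ..., nDCG@10: ...}
```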
Source: Andrew Parry et al., arxiv.org, 05-03-2024: https://arxiv.org/pdf/2405.01122.pdf