Abstract
M3 introduces a novel approach to dense sentence retrieval, combining multi-task learning with a mixed-objective training framework to improve performance on fact verification.
Introduction
Open-domain fact verification is challenging because verifying a claim often requires gathering and combining multiple pieces of evidence (multi-hop evidence extraction).
Traditional sparse retrieval methods rely on lexical matching and struggle to capture semantic similarity beyond surface word overlap.
Dense Text Retrieval
Contrastive learning trains an encoder to score relevant query-document pairs higher than non-relevant ones.
Recent studies extend dense passage retrieval to finer-grained, sentence-level evidence retrieval.
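The contrastive objective described above can be sketched as an in-batch softmax loss over query-sentence similarities. This is a generic illustration of the technique, not M3's exact formulation; the temperature value and the use of in-batch negatives are assumptions.

```python
import numpy as np

def info_nce_loss(query_emb, doc_emb, temperature=0.05):
    """In-batch contrastive loss: each query's positive document is the one
    at the same batch index; all other documents in the batch act as
    negatives (a common setup in dense retrieval training)."""
    # Normalize so the dot product is cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    scores = q @ d.T / temperature  # (batch, batch) similarity matrix
    # Softmax cross-entropy with the diagonal as the target class.
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When query and document embeddings are aligned, the loss approaches zero; mismatched pairs drive it up, which is what pushes relevant pairs above non-relevant ones.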
Multi-hop Text Retrieval
Multi-hop retrieval is essential for complex question-answering tasks.
MDR and AdMIRaL perform retrieval iteratively, conditioning each hop on previously retrieved evidence, and improve recall on the FEVER dataset.
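The iterative idea can be illustrated with a toy loop in which the query representation is updated with the top-ranked evidence after each hop, so later hops can reach sentences related only to earlier evidence. This is a deliberately simplified sketch: MDR and AdMIRaL use learned query re-encoding, not the vector addition assumed here.

```python
import numpy as np

def iterative_retrieve(query_vec, corpus_vecs, hops=2, k=1):
    """Toy multi-hop dense retrieval: retrieve the best sentence, fold it
    into the query representation, and retrieve again for the next hop."""
    evidence = []
    q = query_vec.copy()
    for _ in range(hops):
        scores = corpus_vecs @ q
        # Mask out sentences already retrieved in earlier hops.
        for i in evidence:
            scores[i] = -np.inf
        top = int(np.argmax(scores))
        evidence.append(top)
        # Condition the next hop on retrieved evidence (simple sum here).
        q = q + corpus_vecs[top]
    return evidence
```

In the usage below, sentence 1 shares no terms with the query and is reachable only through the bridge sentence 0, which is exactly the multi-hop case single-shot retrieval misses.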
Method
M3 uses an iterative retrieve-and-rerank scheme for evidence retrieval: at each hop, a dense sentence retriever proposes candidate evidence and a reranker refines the candidate ranking.
Dense sentence retrieval and reranking are therefore the system's two core components.
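One round of the retrieve-and-rerank scheme can be sketched as a two-stage pipeline: a cheap dot-product pass selects the top-k candidates, and a more expensive scoring function (standing in for a learned reranker) reorders them. The `rerank_fn` interface and the cutoff values are illustrative assumptions, not M3's actual API.

```python
import numpy as np

def retrieve_and_rerank(query_vec, sent_vecs, rerank_fn, k=10, final_k=3):
    """Stage 1: fast dense retrieval of top-k candidate sentences.
    Stage 2: rerank the small candidate pool with a costlier scorer."""
    scores = sent_vecs @ query_vec
    candidates = np.argsort(-scores)[:k]  # top-k by dense similarity
    # Higher rerank_fn score = more relevant, so sort descending.
    reranked = sorted(candidates, key=lambda i: -rerank_fn(query_vec, sent_vecs[i]))
    return [int(i) for i in reranked[:final_k]]
```

The design point is that the expensive scorer only ever sees k candidates, which keeps the pipeline tractable over large sentence collections.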
Experimental Setup
Evaluation uses recall@5 for retrieval quality, plus Label Accuracy (LA) and FEVER score for verification, all on the FEVER dataset.
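The metrics above can be made concrete with a short sketch. Recall@5 counts a claim as a hit if some complete gold evidence set appears in the top-5 retrieved sentences; the FEVER score additionally requires the predicted label to be correct (with the evidence requirement waived for NOT ENOUGH INFO claims). The data layout below is assumed for illustration.

```python
def recall_at_k(retrieved, gold_sets, k=5):
    """Fraction of claims where at least one full gold evidence set is
    contained in the top-k retrieved sentence IDs."""
    hits = sum(
        1 for ret, gold in zip(retrieved, gold_sets)
        if any(set(g) <= set(ret[:k]) for g in gold)
    )
    return hits / len(retrieved)

def fever_score(pred_labels, gold_labels, retrieved, gold_sets, k=5):
    """A claim scores only if the label is correct AND (for SUPPORTS/REFUTES)
    a complete gold evidence set is in the top-k retrieved sentences."""
    correct = 0
    for pred, gold, ret, sets in zip(pred_labels, gold_labels, retrieved, gold_sets):
        if pred != gold:
            continue
        if gold == "NOT ENOUGH INFO" or any(set(g) <= set(ret[:k]) for g in sets):
            correct += 1
    return correct / len(gold_labels)
```

Because the FEVER score conditions label credit on retrieved evidence, it is upper-bounded by Label Accuracy, which is why both are reported separately.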
Results
M3 outperforms state-of-the-art models in Label Accuracy and FEVER score on the blind FEVER test set.
Analysis
Multi-task learning improves dense sentence representation quality in M3.
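The multi-task setup can be sketched generically as a weighted sum of per-task losses over a shared encoder, so gradients from every task shape the same sentence representations. The weighting scheme here is a standard illustration, not M3's specific mixed-objective formulation.

```python
def multi_task_loss(task_losses, task_weights):
    """Weighted multi-task objective: total = sum_t w_t * L_t.
    All tasks backpropagate through the shared sentence encoder, which is
    the mechanism by which auxiliary tasks improve representation quality."""
    assert len(task_losses) == len(task_weights)
    return sum(w * l for w, l in zip(task_weights, task_losses))
```

For example, a retrieval loss of 1.0 weighted 0.5 combined with an auxiliary loss of 2.0 weighted 0.25 yields a total of 1.0; tuning the weights trades off how strongly each objective shapes the shared encoder.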