The unsupervised pretraining framework SFAVEL achieves state-of-the-art results in fact verification by distilling language model features.
The authors propose SFAVEL, an unsupervised pretraining framework that distills features from a pretrained language model to learn high-quality claim-fact alignments without any annotated data. Models pretrained with SFAVEL reach state-of-the-art performance on standard fact verification benchmarks.
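To make the claim-fact alignment idea concrete, the sketch below shows a generic contrastive (InfoNCE-style) objective over embeddings: a claim is pulled toward its supporting fact and pushed away from other candidates, with no labels beyond knowing which fact is paired with which claim. The function names, toy embeddings, and the specific InfoNCE form are illustrative assumptions, not SFAVEL's exact training objective.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(claim_emb, fact_embs, pos_idx, temperature=0.1):
    """Contrastive loss for one claim against candidate facts.

    claim_emb:  embedding of the claim (e.g., from a distilled encoder).
    fact_embs:  embeddings of candidate facts from the knowledge base.
    pos_idx:    index of the fact that actually supports the claim.
    """
    logits = [cosine(claim_emb, f) / temperature for f in fact_embs]
    # Numerically stable softmax over the candidates.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    # Negative log-probability of the supporting fact.
    return -math.log(exps[pos_idx] / sum(exps))

# Toy 2-d embeddings: the claim aligns with the first fact.
claim = [1.0, 0.0]
facts = [[1.0, 0.0], [0.0, 1.0]]
aligned_loss = info_nce(claim, facts, pos_idx=0)
misaligned_loss = info_nce(claim, facts, pos_idx=1)
```

Minimizing this loss drives claim embeddings toward the features of their supporting facts; in a distillation setup, the fact features would come from a frozen pretrained language model, so the smaller encoder learns alignments without annotations.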