Core Concepts
The authors introduce the Question-Attended Span Extraction (QASE) module to enhance generative language models for Machine Reading Comprehension (MRC), improving answer quality and factual consistency and allowing fine-tuned PLMs to surpass leading LLMs such as GPT-4.
Summary
The study addresses the challenges generative models face in MRC by introducing QASE, which improves the quality of generated answers. Results show significant gains with QASE across multiple datasets: the enhanced models match or outperform SOTA extractive methods and leading LLMs such as GPT-4, and QASE achieves this without a significant increase in computational cost.
Key points include:
- Introduction of the QASE module to guide text generation in fine-tuned PLMs (see the architecture sketch after this list).
- Comparison of QASE-enhanced models with vanilla fine-tuned models on multiple datasets.
- Demonstrated improvement in answer quality, factual consistency, and performance metrics.
- Ablation studies showing the superiority of the QASE architecture over baseline span-extraction modules.
- Evaluation of the model's ability to leverage real-world knowledge and improve context-based answer generation.
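
The exact QASE architecture is defined in the paper; as a rough illustration only, here is a minimal PyTorch sketch assuming the module cross-attends context token representations to the question and classifies each context token for span membership. The class name `QASESketch`, the BIO tagging scheme, and all layer sizes are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class QASESketch(nn.Module):
    """Illustrative question-attended span extraction head (not the paper's exact design).

    Context token states are cross-attended to the question, then each context
    token is tagged for span membership (BIO-style, to allow multi-span answers).
    The resulting tagging loss would be added to the PLM's generation loss.
    """

    def __init__(self, hidden_size: int, num_heads: int = 8, num_tags: int = 3):
        super().__init__()
        # Cross-attention: context tokens query the question tokens.
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        # Per-token classifier over fused (original + question-attended) states.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_tags),
        )

    def forward(self, context_states: torch.Tensor, question_states: torch.Tensor) -> torch.Tensor:
        # context_states: (batch, ctx_len, hidden); question_states: (batch, q_len, hidden)
        attended, _ = self.cross_attn(context_states, question_states, question_states)
        fused = torch.cat([context_states, attended], dim=-1)
        return self.classifier(fused)  # (batch, ctx_len, num_tags) tag logits

# Toy usage with random states standing in for a PLM's hidden outputs.
head = QASESketch(hidden_size=768)
ctx = torch.randn(2, 128, 768)  # 2 examples, 128 context tokens
qst = torch.randn(2, 16, 768)   # 16 question tokens
print(head(ctx, qst).shape)     # torch.Size([2, 128, 3])
```

Under this reading, the span-tagging loss acts as an auxiliary training signal that grounds generation in answer-bearing spans of the context, which is one plausible mechanism for the reported gains in factual consistency.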
Statistics
SQuAD: 83.16 | 90.71 (EM | F1)
MultiSpanQA: 67.41 | 83.09 (Exact Match F1 | Overlap F1)
Quoref: 75.17 | 80.49 (EM | F1)
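
The EM and F1 figures above are the standard SQuAD-style answer metrics. For reference, here is a minimal sketch of how they are typically computed; normalization mirrors the official SQuAD script (lowercasing, stripping punctuation and articles), and this is illustrative code, not the paper's evaluation script:

```python
import re
from collections import Counter

def normalize(text: str) -> list:
    """Lowercase, drop articles and punctuation, and tokenize (SQuAD-style)."""
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    text = re.sub(r"[^\w\s]", " ", text)
    return text.split()

def exact_match(prediction: str, gold: str) -> float:
    """1.0 iff the normalized prediction equals the normalized gold answer."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the prediction and the gold answer."""
    pred, gold_toks = normalize(prediction), normalize(gold)
    overlap = sum((Counter(pred) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# Partial overlap: EM is strict (0.0) while F1 gives partial credit (~0.67).
print(exact_match("the QASE module", "QASE"), f1_score("the QASE module", "QASE"))
```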
Quotes
"QASE enhances generative PLMs to match or exceed the capabilities of SOTA extractive models."
"Improves context-based answer generation and application of pre-existing real-world knowledge."