Core Concepts
PROM introduces a new PhRase-level cOpying Mechanism to enhance attention on n-grams, improving factuality and performance in abstractive summarization.
Summary
PROM is a novel method that enhances phrase copying in abstractive summarization, showing significant improvements in both fine-tuning and zero-shot settings. The method improves faithfulness and entity coverage, as confirmed by human evaluation, demonstrating its effectiveness across diverse datasets.
Building on the remarkable achievements of pre-trained language models in abstractive summarization, the copying mechanism has proved helpful by improving factuality, stability, and overall performance. This work proposes PROM, a new PhRase-level cOpying Mechanism that enhances attention on n-grams and can be applied to zero-shot summarization with pre-training. PROM adds an indicator layer to explicitly pick up tokens in n-grams that can be copied from the source and calculates an auxiliary loss for the copying prediction. Empirical studies show that PROM makes significant improvements when fine-tuned on benchmarks. In the zero-shot setting, PROM is used in self-supervised pre-training on raw corpora and provides new general baselines on a wide range of summarization datasets. Further analysis shows that PROM performs more reasonable copying and contributes to faithfulness.
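The indicator layer and its auxiliary loss can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the labeling scheme (`ngram_copy_labels`), the function names, and the use of a plain sigmoid over per-token logits are assumptions made for illustration, showing only the general idea of supervising which source tokens sit inside copyable n-grams.

```python
import numpy as np

def ngram_copy_labels(source_tokens, summary_tokens, n=2):
    """Label each source token 1 if it lies inside an n-gram that also
    appears verbatim in the summary, else 0 (illustrative labeling)."""
    summary_ngrams = {tuple(summary_tokens[i:i + n])
                      for i in range(len(summary_tokens) - n + 1)}
    labels = np.zeros(len(source_tokens))
    for i in range(len(source_tokens) - n + 1):
        if tuple(source_tokens[i:i + n]) in summary_ngrams:
            labels[i:i + n] = 1.0  # mark every token inside the matched n-gram
    return labels

def copy_indicator_loss(indicator_logits, labels):
    """Binary cross-entropy between the indicator layer's per-token copy
    probabilities and the n-gram overlap labels (the auxiliary loss)."""
    probs = 1.0 / (1.0 + np.exp(-indicator_logits))  # sigmoid
    eps = 1e-12
    return float(-np.mean(labels * np.log(probs + eps)
                          + (1.0 - labels) * np.log(1.0 - probs + eps)))

source = "the cat sat on the mat".split()
summary = "the cat slept".split()
labels = ngram_copy_labels(source, summary, n=2)  # only "the cat" overlaps
loss = copy_indicator_loss(np.zeros(len(source)), labels)
```

In a real model the logits would come from a learned layer over the encoder states, and this loss would be added to the standard generation loss.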
The copying method represents a compromise between extraction and generation, alleviating inconsistency problems. The consistency, or faithfulness, of abstractive summarization remains to be improved: intrinsic reasons lie in inherent model imperfections such as exposure bias, while extrinsic reasons may stem from the language model's excessive confidence, which leads to unfaithful summaries. The copying method computes a copying distribution over the source sequence and then aggregates it with the language model's distribution, so unfamiliar tokens can be directly copied or ignored.
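The aggregation step described above can be sketched in the pointer-generator style: scatter the copying distribution over source positions into vocabulary space, then interpolate with the language-model distribution. This is a minimal sketch, not PROM's actual formulation; the function name and the fixed mixing weight `p_gen` are assumptions for illustration.

```python
import numpy as np

def mix_distributions(vocab_dist, copy_attn, source_ids, p_gen):
    """Aggregate the LM vocabulary distribution with the copying
    distribution (pointer-generator-style interpolation).

    vocab_dist : LM probabilities over the vocabulary, shape (V,)
    copy_attn  : copying distribution over source positions, shape (S,)
    source_ids : vocabulary id of the token at each source position
    p_gen      : weight on generation vs. copying (assumed scalar here)
    """
    final = p_gen * vocab_dist.copy()
    for pos, tok_id in enumerate(source_ids):
        # route each position's copy mass to that token's vocab entry
        final[tok_id] += (1.0 - p_gen) * copy_attn[pos]
    return final

# Toy vocabulary of size 3; both source positions hold token id 2,
# so its probability is boosted by the copy mass.
final = mix_distributions(np.array([0.7, 0.2, 0.1]),
                          np.array([0.5, 0.5]), [2, 2], p_gen=0.5)
```

Because both inputs are probability distributions and `p_gen` is in [0, 1], the result remains a valid distribution; a token unseen in training but present in the source can still receive high probability through the copy term.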
Summarization also faces data bottlenecks: high-quality summaries are usually human-written and vary widely in style, while language models require large amounts of data for supervised fine-tuning. Copying methods offer an alternative, picking up tokens from the source sequence to cope with expressions the model is unfamiliar with.
Statistics
Empirical studies show that PROM makes significant improvements in fine-tuning.
Our model surpasses all previous copying methods.
Our model shows advantages in recall but little difference in precision.
Human evaluation results show that our model significantly outperforms BART in faithfulness.
Zero-shot results indicate that our method achieves better scores with lead-bias pre-training.
Quotes
"The proposed PROM encourages phrase-level copying for enhanced attention on n-grams."
"PROM surpasses previous methods by providing significant improvements in both supervised fine-tuning and zero-shot settings."