Core Concepts
Adapting unlabeled domain-specific knowledge into the model enhances few-shot table-to-text generation by bridging the gap between structured tabular data and natural-language text.
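As a rough illustration of this core insight (not the paper's implementation), the sketch below adapts a generic PLM to the target domain by continued language-model training on unlabeled domain text before few-shot fine-tuning on table-text pairs. The GPT-2 backbone, corpus snippets, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: continued causal-LM training on unlabeled domain text
# so the model absorbs domain phrasing and facts before fine-tuning.
# Backbone, corpus, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Unlabeled domain-specific sentences (hypothetical biography domain).
domain_corpus = [
    "Jane Doe (born 4 May 1971) is a Canadian novelist and essayist.",
    "She studied comparative literature at McGill University.",
]

model.train()
for epoch in range(3):
    for text in domain_corpus:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM objective: labels are the input ids.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

After this adaptation step, the model would be fine-tuned on the handful of labeled table-text pairs available in the few-shot setting.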
Abstract
Pretrained language models (PLMs) struggle to bridge the gap between structured tabular data and natural-language text.
The Adapt-Knowledge-to-Generate (AKG) framework addresses this by injecting unlabeled domain-specific knowledge into the model to improve performance.
Extensive experiments on three datasets show that AKG achieves superior fluency and accuracy compared to state-of-the-art approaches.
AKG's modularized pretraining strategy enhances the model's ability to fully exploit domain-specific knowledge.
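The summary does not detail what the modules are, so the sketch below shows only one plausible reading of a modularized pretraining strategy: small bottleneck adapters are trained separately against a frozen shared backbone (one on domain knowledge, one on table structure) and then composed for few-shot fine-tuning. All module names, shapes, and the staging are hypothetical.

```python
# Hypothetical sketch of modularized pretraining: each small module is
# trained on its own objective against a frozen shared backbone, then
# both are active for few-shot fine-tuning. Names and sizes are
# illustrative assumptions, not the paper's specification.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter applied residually to backbone hidden states."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
for p in backbone.parameters():
    p.requires_grad = False  # shared backbone stays fixed across stages

knowledge_adapter = Adapter()  # stage 1: trained on unlabeled domain text
table_adapter = Adapter()      # stage 2: trained on linearized tables

def forward_with_modules(x, modules):
    h = backbone(x)
    for m in modules:
        h = m(h)
    return h

# Stage 3: few-shot fine-tuning with both modules composed; each earlier
# stage would optimize only its own adapter's parameters (losses omitted).
x = torch.randn(2, 16, 768)  # (batch, seq, hidden) dummy embeddings
y = forward_with_modules(x, [knowledge_adapter, table_adapter])
```

Training each module in isolation keeps the objectives from interfering, which is one way a modularized strategy could let the model exploit each knowledge source fully.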
Reported Results
"Our model achieves superior performance in terms of both fluency and accuracy as judged by human and automatic evaluations."
"Compared to previous state-of-the-art approaches, our method achieves remarkable improvement in fluency and faithfulness of the generated contents."
Quotes
"The core insight of AKG is to adapt unlabeled domain-specific knowledge into the model."
"Our contributions can be summarized as proposing a novel framework for few-shot table-to-text generation."