Core Concepts
This paper introduces SEPTA, a novel framework that improves commonsense question answering by retrieving relevant knowledge subgraphs from a knowledge graph using a graph-text alignment technique, so that the retrieved graph evidence matches the question text in a shared representation space.
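The paper does not spell out the retrieval procedure here, but the general idea of graph-text alignment retrieval can be sketched as follows: embed the question and the knowledge-graph nodes in a shared space, rank nodes by similarity to the question, and keep the subgraph induced by the top-k nodes. Everything below (the toy `embed` encoder, the node/edge format, the function names) is a hypothetical illustration, not SEPTA's actual implementation.

```python
# Hypothetical sketch of graph-text alignment retrieval (NOT SEPTA's actual
# method): score KG nodes against the question in a shared embedding space,
# then return the subgraph induced by the top-k scoring nodes.
import math

def embed(text, dim=16):
    """Toy deterministic bag-of-characters embedding standing in for a
    learned encoder that aligns text and graph nodes in one space."""
    vec = [0.0] * dim
    for token in text.lower().split():
        for i, ch in enumerate(token):
            vec[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors from embed() are unit-normalized, so the dot product
    # equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve_subgraph(question, nodes, edges, k=3):
    """Rank nodes by similarity to the question, keep the top-k, and
    return the induced subgraph: edges whose endpoints both survive."""
    q = embed(question)
    ranked = sorted(nodes, key=lambda n: cosine(q, embed(n)), reverse=True)
    keep = set(ranked[:k])
    sub_edges = [(u, r, v) for (u, r, v) in edges if u in keep and v in keep]
    return keep, sub_edges

if __name__ == "__main__":
    nodes = ["bird", "wing", "fly", "car", "engine"]
    edges = [("bird", "has", "wing"),
             ("bird", "capable_of", "fly"),
             ("car", "has", "engine")]
    keep, sub = retrieve_subgraph("what do birds use to fly", nodes, edges, k=3)
    print(keep, sub)
```

In a real system the toy `embed` would be replaced by trained, aligned text and graph encoders; the induced-subgraph step is the part that keeps only edges connecting retrieved concepts.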
Stats
SEPTA improves accuracy by 6.54% and 6.09% on the in-house dev (IHdev) and test (IHtest) splits of CommonsenseQA, respectively, compared to fine-tuned RoBERTa.
Compared to the GSC method, SEPTA improves accuracy by 2.00% and 0.70% on OpenBookQA with RoBERTa and AristoRoBERTa as backbones, respectively.
SEPTA outperforms DHLK on both CommonsenseQA and OpenBookQA datasets and DRAGON on OpenBookQA.
Removing graph-text alignment from SEPTA causes the largest performance drop among the ablations, decreasing accuracy by 4.95% on CommonsenseQA and 5.13% on OpenBookQA.
In low-resource settings with only 5% of the training data, SEPTA significantly outperforms the other baselines on both CommonsenseQA and OpenBookQA.