Key Concepts
Pre-trained language models (PLMs) leverage chains-of-thought (CoT) to simulate human reasoning and inference, achieving strong performance in multi-hop QA. However, a gap remains between the reasoning abilities of PLMs and those of humans when tackling complex problems.
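For reference, a chain-of-thought prompt for multi-hop QA asks the model to spell out intermediate reasoning steps before committing to an answer. The snippet below is a generic illustration only; the question and steps are not taken from the paper.

```python
# Generic chain-of-thought (CoT) prompt for a multi-hop question.
# The example question and reasoning steps are illustrative, not from the paper.
cot_prompt = (
    "Q: Which country is the birthplace of the director of Inception?\n"
    "A: Let's think step by step.\n"
    "1. Inception was directed by Christopher Nolan.\n"
    "2. Christopher Nolan was born in London.\n"
    "3. London is in the United Kingdom.\n"
    "Therefore, the answer is the United Kingdom."
)
```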
Abstract
This work introduces the Prompting Explicit and Implicit knowledge (PEI) framework, which tackles multi-hop QA by prompting both explicit and implicit knowledge, in line with the human reading-comprehension process. The framework treats the input passages as explicit knowledge and uses them to elicit implicit knowledge, offering an approach that mirrors how humans read and reason. Experimental results confirm that PEI performs comparably to the state of the art on HotpotQA and that implicit knowledge contributes to improving the model's reasoning ability.
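To make the two-stage idea concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's actual architecture: it assumes a generic `generate(prompt) -> str` text-generation callable, treats the retrieved passages as explicit knowledge, prompts the model for the implicit bridging facts, and then conditions the final answer on both.

```python
# Hypothetical PEI-style two-stage prompting sketch (not the paper's implementation).
# Assumes `generate` is any callable mapping a prompt string to generated text.

def elicit_implicit_knowledge(generate, question: str, passages: list[str]) -> str:
    """Stage 1: treat the passages as explicit knowledge and prompt the model
    to state the implicit (unstated) facts that bridge them to the question."""
    explicit = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Passages (explicit knowledge):\n"
        f"{explicit}\n\n"
        f"Question: {question}\n"
        "List the implicit facts needed to connect the passages to the question:"
    )
    return generate(prompt)


def answer_with_both(generate, question: str, passages: list[str]) -> str:
    """Stage 2: condition the final answer on the explicit passages plus the
    implicit knowledge elicited in stage 1."""
    implicit = elicit_implicit_knowledge(generate, question, passages)
    prompt = (
        "Explicit knowledge:\n" + "\n".join(passages) + "\n\n"
        "Implicit knowledge:\n" + implicit + "\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)

# Example usage (with any prompt-to-text callable):
# answer = answer_with_both(my_llm_generate, "Who ...?", ["Passage 1 ...", "Passage 2 ..."])
```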
Quotes
"Pre-trained language models (PLMs) leverage chains-of-thought (CoT) to simulate human reasoning and inference processes."
"Experimental results show that PEI performs comparably to the state-of-the-art on HotpotQA."
"Ablation studies confirm the efficacy of our model in bridging and integrating explicit and implicit knowledge."