Prompting Explicit and Implicit Knowledge for Multi-hop Question Answering Based on Human Reading Process
Pre-trained language models (PLMs) leverage chains-of-thought (CoT) to simulate human reasoning and inference processes, achieving strong performance in multi-hop QA. However, a gap persists between PLMs' reasoning abilities and those of humans when tackling complex problems.