Core Concepts
Large language models can be trained to be more logically consistent and factual by incorporating principled probabilistic reasoning into the training objective, without relying on external reasoning tools.
Summary
The paper presents a method for training large language models (LLMs) to be more logically consistent and factual, without the need for external reasoning tools. The key ideas are:
- Logical Consistency:
  - The authors introduce a semantic loss function that penalizes the LLM for assigning truth values inconsistent with a set of logical constraints (implications); a minimal sketch follows this summary.
  - This encourages the LLM to perform principled probabilistic reasoning over the possible truth assignments during training.
- Factuality:
  - The authors embed factual information from a training set of ground facts into the logical constraints.
  - This ensures the LLM's truth-value predictions are consistent with the known facts.
- Experiments:
  - The authors evaluate their "LOCO-LMS" approach on the BeliefBank dataset, comparing it to a pre-trained Macaw-Large model and a baseline that uses an external reasoner (ConCoRD).
  - LOCO-LMS outperform the baselines in terms of factuality and logical self-consistency, especially in low-data regimes.
  - The authors also show that LOCO-LMS can generalize the learned logical structures to unseen entities.
Overall, the paper demonstrates that incorporating principled probabilistic reasoning into the training of LLMs can lead to more reliable and consistent language models, without the need for external reasoning tools.
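To make the semantic-loss idea concrete, here is a minimal sketch for a single implication constraint A ⇒ B, assuming the LM exposes its probabilities that statements A and B are true (e.g., derived from yes/no answer scores). The function name `semantic_loss_implication` and the per-constraint formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def semantic_loss_implication(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Semantic loss for a single implication constraint A -> B.

    p_a, p_b: the model's probabilities that statements A and B are true.

    The satisfying assignments of (not A) or B have total probability
    1 - p_a * (1 - p_b) under the model; the loss is its negative log,
    so it vanishes when the constraint is satisfied with certainty and
    grows as the model believes A while doubting B.
    """
    wmc = 1.0 - p_a * (1.0 - p_b)   # probability mass on satisfying assignments
    return -torch.log(wmc + 1e-12)  # small epsilon for numerical stability

# Toy usage: the model is confident in the antecedent (e.g., "a robin is a
# bird") but unsure about the consequent ("a robin is an animal"); the
# gradient on p_b pushes it upward.
p_a = torch.tensor(0.9, requires_grad=True)
p_b = torch.tensor(0.4, requires_grad=True)
loss = semantic_loss_implication(p_a, p_b)
loss.backward()
```

Under the same assumptions, factuality can be folded in naturally: for a ground fact known to be true in the training set, its probability is clamped to 1 (or supervised with a standard cross-entropy term), so the constraint loss above reduces to −log p_b and directly pushes the model toward the entailed consequent.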
Statistics
LOCO-LMS fine-tuned on just the antecedent facts (T1) achieve 0.79 F1 on antecedents, 0.98 F1 on consequents, and 0.99 logical consistency.
With 5-10% of the full dataset (T1+T2), LOCO-LMS outperform standard fine-tuning in terms of logical consistency and factuality on consequents.
With 75% of the full dataset, LOCO-LMS and standard fine-tuning achieve comparable performance.
Quotes
"LOCO-LMS improve upon ConCoRD in terms of factuality and self-consistency in complex reasoning tasks, especially when queried on unseen facts."
"Probabilistic reasoning objectives can impose structure in a language model's conceptual space."