The paper studies the impact of the conditional independence assumption commonly used in neurosymbolic learning models. It shows that this assumption leads to several issues:
Bias towards deterministic solutions: The minima of the semantic loss function under the independence assumption correspond to distributions that deterministically assign values to some variables. This prevents the model from representing uncertainty over multiple valid options.
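This bias can be seen already in a toy case. The sketch below assumes a single XOR constraint over two binary variables (a hypothetical example, not taken from the paper's experiments): under the independence assumption the probability of satisfying the constraint factorizes, and the semantic loss only reaches zero at deterministic assignments, never at a distribution that spreads mass over both valid options.

```python
import math

def semantic_loss_xor(p1, p2):
    """Semantic loss -log P(y1 XOR y2) when y1, y2 are
    independent Bernoulli variables with parameters p1, p2."""
    p_sat = p1 * (1 - p2) + (1 - p1) * p2  # probability the constraint holds
    return -math.log(p_sat)

# Deterministic assignments satisfy the constraint with probability 1,
# so the loss is 0 there:
loss_det = semantic_loss_xor(1.0, 0.0)

# The "uncertain" choice p1 = p2 = 0.5 satisfies the constraint only with
# probability 0.5, so the loss is log 2 > 0 -- the independent model cannot
# place probability 0.5 on each of the two valid worlds (1,0) and (0,1):
loss_unc = semantic_loss_xor(0.5, 0.5)
```

Any independent distribution with `p_sat = 1` must fix both variables, which is the deterministic bias described above.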
Non-convex and disconnected minima: The set of possible independent distributions that minimize the semantic loss is characterized using tools from logic and computational topology. It is shown to be non-convex and disconnected in general, making the optimization problem challenging.
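The non-convexity is also visible in the same toy XOR setting (again a hypothetical illustration, not the paper's own construction): the minimizers are the two isolated deterministic points, and every point on the segment between them has strictly positive loss.

```python
import math

def xor_loss(p1, p2):
    """-log P(y1 XOR y2) for independent Bernoulli parameters p1, p2."""
    return -math.log(p1 * (1 - p2) + (1 - p1) * p2)

# The only minimizers are the deterministic points (1, 0) and (0, 1).
endpoints = [(1.0, 0.0), (0.0, 1.0)]

# Strict convex combinations of the two minimizers have strictly positive
# loss, so the set of minimizers is non-convex and, here, disconnected
# (two isolated points):
segment_losses = [xor_loss(1 - t, t) for t in (0.25, 0.5, 0.75)]
```

Gradient-based training must therefore commit to one of the disconnected minima, which is one way the optimization problem becomes hard.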
The paper provides a theoretical analysis to justify recent experimental findings that more expressive perception models outperform conditionally independent ones on neurosymbolic tasks. It highlights the need for neurosymbolic learning methods that can properly represent uncertainty without sacrificing tractability.
Key insights extracted by Emile van Kr..., arxiv.org, 04-15-2024
Source: https://arxiv.org/pdf/2404.08458.pdf