The paper introduces a formalism for informed supervised classification tasks, and builds upon it to define three abstract neurosymbolic techniques grounded in probabilistic reasoning: semantic conditioning, semantic regularization, and semantic conditioning at inference.
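To make the distinction between the first two techniques concrete, here is a minimal sketch on a toy problem: conditioning restricts the model's output distribution to labelings that satisfy the prior knowledge and renormalizes, while regularization keeps the distribution unchanged and instead penalizes the probability mass assigned to violating labelings. The toy setting (two categorical variables, an "even sum" constraint) and all function names are illustrative assumptions, not definitions from the paper.

```python
import math

def joint_probs(p1, p2):
    """Joint distribution over (y1, y2), assuming the two softmax
    outputs are independent given the input (a common modeling choice)."""
    return {(i, j): p1[i] * p2[j]
            for i in range(len(p1)) for j in range(len(p2))}

def semantic_conditioning(p1, p2, satisfies):
    """Zero out labelings violating the constraint and renormalize:
    p(y | y satisfies K) = p(y) * 1[y satisfies K] / P(K)."""
    joint = joint_probs(p1, p2)
    mass = sum(p for y, p in joint.items() if satisfies(y))
    return {y: (p / mass if satisfies(y) else 0.0)
            for y, p in joint.items()}

def semantic_regularization_loss(p1, p2, satisfies):
    """-log P(K): a penalty term added to the supervised training loss;
    the output distribution itself is left unconstrained."""
    joint = joint_probs(p1, p2)
    return -math.log(sum(p for y, p in joint.items() if satisfies(y)))

# Toy prior knowledge: the two labels must sum to an even number.
even_sum = lambda y: (y[0] + y[1]) % 2 == 0
p1, p2 = [0.7, 0.3], [0.4, 0.6]

conditioned = semantic_conditioning(p1, p2, even_sum)
penalty = semantic_regularization_loss(p1, p2, even_sum)
```

Semantic conditioning at inference would apply `semantic_conditioning` only at test time, after training the network without the constraint. The brute-force enumeration above is exponential in the number of output variables, which is exactly why the paper's complexity analysis of the underlying probabilistic reasoning problems matters.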
The paper then examines the asymptotic computational complexity of several classes of probabilistic reasoning problems that frequently arise in the neurosymbolic literature. It shows that probabilistic techniques cannot scale on certain popular tasks from the literature, whereas others thought intractable can in fact be computed efficiently.
Specifically, the paper analyzes the complexity of probabilistic reasoning for several classes of logics commonly used to express prior knowledge in neurosymbolic systems.
The paper concludes by discussing possible future research directions, including a sharper understanding of semi-tractable logics and their practical consequences, exploring approximate methods for intractable prior knowledge, and expanding the formalism beyond supervised classification.