Numerous neuro-symbolic approaches have recently been proposed, typically with the goal of adding symbolic knowledge to the output layer of a neural network. Ideally, such losses maximize the probability that the neural network's predictions satisfy the underlying domain. Unfortunately, this type of probabilistic inference is often computationally infeasible. Neuro-symbolic approaches therefore commonly resort to fuzzy approximations of this probabilistic objective, sacrificing sound probabilistic semantics, or to sampling, which is very seldom feasible. We approach the problem by first assuming the constraint decomposes conditioned on the features learned by the network. We then iteratively strengthen our approximation, restoring the dependence between the constraints most responsible for degrading its quality. This corresponds to computing the mutual information between pairs of constraints conditioned on the network's learned features, and may be construed as a measure of how well aligned the gradients of two distributions are. We show how to compute this efficiently for tractable circuits. We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles, observing that it improves upon the baselines while sidestepping intractability.
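The iterative strengthening step can be sketched as follows: estimate the mutual information between the satisfaction of each pair of constraints (conditioned on the network's features) and restore the dependence for the pair where it is highest. This is a minimal illustrative sketch, not the paper's implementation; the toy joint distributions and the `most_dependent_pair` helper are assumptions for exposition, whereas the actual approach computes these quantities efficiently on tractable circuits.

```python
import math

def mutual_information(p_joint):
    """Mutual information I(A; B) for two binary variables, given their
    joint distribution p_joint[a][b] with a, b in {0, 1}."""
    p_a = [p_joint[a][0] + p_joint[a][1] for a in (0, 1)]
    p_b = [p_joint[0][b] + p_joint[1][b] for b in (0, 1)]
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if p_joint[a][b] > 0:
                mi += p_joint[a][b] * math.log(p_joint[a][b] / (p_a[a] * p_b[b]))
    return mi

def most_dependent_pair(joints):
    """Given joint satisfaction distributions for pairs of constraints
    (conditioned on the network's learned features), return the pair whose
    independence assumption degrades the approximation the most, i.e. the
    pair with the highest conditional mutual information."""
    return max(joints, key=lambda pair: mutual_information(joints[pair]))

# Toy example with three constraints: pair (0, 2) is strongly correlated,
# so it is the first candidate for restoring the dependence.
joints = {
    (0, 1): [[0.25, 0.25], [0.25, 0.25]],  # independent
    (0, 2): [[0.45, 0.05], [0.05, 0.45]],  # strongly dependent
    (1, 2): [[0.30, 0.20], [0.20, 0.30]],  # weakly dependent
}
print(most_dependent_pair(joints))  # -> (0, 2)
```

Iterating this selection and merging the chosen pair into a joint constraint progressively trades tractability for fidelity to the true probabilistic objective.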