We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to scale the Multi-digit MNISTAdd benchmark to sums of 15 MNIST digits, up from 4 in competing systems, and that A-NeSI achieves explainability and safety without a performance penalty.
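To make the core idea of point 2) concrete, the sketch below illustrates training a neural inference model on data generated by the background knowledge itself, in a toy two-digit MNISTAdd setting where the symbolic program is simply y = d1 + d2. Everything here is an illustrative assumption rather than the paper's exact architecture: the name `prediction_net`, the Dirichlet prior over beliefs, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

# Toy MNISTAdd setting: two digits d1, d2 in {0..9}, symbolic program y = d1 + d2.
# Sketch of the A-NeSI-style idea: train a neural prediction model q(y | P) on
# data generated from the background knowledge, so that test-time inference is a
# single forward pass rather than exact enumeration over all digit combinations.

N_DIGITS, N_CLASSES, N_SUMS = 2, 10, 19  # possible sums range over 0..18

prediction_net = nn.Sequential(  # hypothetical architecture, for illustration only
    nn.Linear(N_DIGITS * N_CLASSES, 128), nn.ReLU(),
    nn.Linear(128, N_SUMS),
)
opt = torch.optim.Adam(prediction_net.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Sample beliefs P over the digits (here: from a flat Dirichlet prior).
    P = torch.distributions.Dirichlet(torch.ones(64, N_DIGITS, N_CLASSES)).sample()
    # 2) Sample symbolic worlds w ~ P and run the symbolic program y = f(w).
    w = torch.distributions.Categorical(P).sample()  # (64, 2) digit labels
    y = w.sum(dim=1)                                 # ground-truth sums via f
    # 3) Fit q(y | P) by maximum likelihood on the generated (P, y) pairs.
    logits = prediction_net(P.flatten(1))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under these assumptions, prediction at test time amounts to feeding the perception network's output beliefs through `prediction_net` once, which is what keeps inference polynomial in the number of digits.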