Recent work on neuro-symbolic inductive logic programming has led to promising approaches that can learn explanatory rules from noisy, real-world data. While some proposals approximate logical operators with parameter-free differentiable operators from fuzzy or real-valued logic, thus diminishing their capacity to fit the data, other approaches are only loosely based on logic, making it difficult to interpret the learned "rules". In this paper, we propose learning rules with the recently proposed logical neural networks (LNN). Compared to other approaches, LNNs offer a strong connection to classical Boolean logic, allowing for precise interpretation of learned rules, while harboring parameters that can be trained with gradient-based optimization to effectively fit the data. We extend LNNs to induce rules in first-order logic. Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable and can achieve comparable or higher accuracy due to their flexible parameterization.
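To give a concrete sense of the parameterized operators the abstract refers to, the sketch below implements a weighted real-valued conjunction of the kind used in LNNs: a clamped linear activation with learnable weights `w_i` and bias `beta`. This is a minimal illustration, not the paper's implementation; the function name and parameterization are illustrative. With `beta = 1` and unit weights, the operator reduces to classical Boolean AND on 0/1 inputs, which is what makes the learned rules precisely interpretable.

```python
def lnn_and(truth_values, weights, beta):
    """Weighted real-valued conjunction (LNN-style sketch).

    Computes clamp(beta - sum_i w_i * (1 - x_i)) into [0, 1].
    Each x_i in truth_values is a truth value in [0, 1]; weights
    and beta are learnable parameters. Low-truth inputs with large
    weights pull the conjunction toward 0.
    """
    activation = beta - sum(w * (1.0 - x) for w, x in zip(weights, truth_values))
    return max(0.0, min(1.0, activation))


# With beta = 1 and unit weights, the operator matches Boolean AND
# on crisp 0/1 inputs:
print(lnn_and([1.0, 1.0], [1.0, 1.0], 1.0))  # 1.0 (True AND True)
print(lnn_and([1.0, 0.0], [1.0, 1.0], 1.0))  # 0.0 (True AND False)

# Down-weighting a noisy input (w = 0.3) softens its influence,
# which is how the parameters let the operator fit noisy data:
print(lnn_and([1.0, 0.0], [1.0, 0.3], 1.0))  # 0.7
```

In training, `weights` and `beta` would be tensors updated by gradient descent (the clamp is typically handled with a differentiable relaxation); after training, thresholding the weights recovers a classical Boolean rule.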