Logic-based machine learning aims to learn general, interpretable knowledge in a data-efficient manner. However, labelled data must be specified in a structured logical form. To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates neural networks with a logic-based machine learning system capable of learning from noisy examples, in order to learn interpretable knowledge from labelled unstructured data. We demonstrate the generality of FFNSL on four neural-symbolic classification problems, where different pre-trained neural network models and logic-based machine learning systems are integrated to learn interpretable knowledge from sequences of images. We evaluate the robustness of our framework using images subject to distributional shifts, for which the pre-trained neural networks may predict incorrectly, and with high confidence. We analyse the impact of these shifts on the accuracy of the learned knowledge and on run-time performance, comparing FFNSL to tree-based and pure neural approaches. Our experimental results show that FFNSL outperforms the baselines, learning more accurate and interpretable knowledge from fewer examples.
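To make the feed-forward structure described above concrete, the following is a minimal sketch (not the authors' code) of the pipeline: a pre-trained network labels each image in a sequence, and its predictions are turned into weighted examples for a logic-based learner such as ILASP, whose `#pos(id@penalty, ...)` syntax supports weighted examples. All names here (`Prediction`, `to_weighted_example`, the `value/2` context predicate, the card-game task) are illustrative assumptions, as is the choice of penalty function.

```python
# Sketch of an FFNSL-style feed-forward pipeline, assuming an
# ILASP-like target system with weighted positive examples.
from dataclasses import dataclass
from typing import List

@dataclass
class Prediction:
    label: str         # symbolic class predicted by the neural network
    confidence: float  # softmax confidence in [0, 1]

def to_weighted_example(eg_id: int, preds: List[Prediction], outcome: str) -> str:
    """Encode one labelled image sequence as a weighted positive example.

    The penalty grows with the networks' confidence, so examples built
    from uncertain neural predictions are cheaper for the symbolic
    learner to leave uncovered (i.e. treat as noise)."""
    penalty = max(1, round(100 * min(p.confidence for p in preds)))
    context = " ".join(f"value({i},{p.label})." for i, p in enumerate(preds))
    return f"#pos(eg{eg_id}@{penalty}, {{{outcome}}}, {{}}, {{{context}}})."

if __name__ == "__main__":
    # Hypothetical predictions for a two-image sequence in a card-game task.
    seq = [Prediction("jack", 0.93), Prediction("ace", 0.87)]
    print(to_weighted_example(0, seq, "player1_wins"))
    # -> #pos(eg0@87, {player1_wins}, {}, {value(0,jack). value(1,ace).}).
```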