Neural-symbolic approaches have recently gained popularity as a way to inject prior knowledge into a learner without requiring this knowledge to be induced from data. These approaches can potentially learn competitive solutions while significantly reducing the amount of supervised data required. A large class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge, relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neural-symbolic learning tasks is unambiguously determined once a t-norm generator is selected. When restricted to supervised learning, the presented theoretical apparatus provides a clean justification for the popular cross-entropy loss, which has been shown to speed up convergence and to mitigate the vanishing gradient problem in very deep structures. Moreover, the proposed learning formulation extends the advantages of the cross-entropy loss to the general knowledge that can be represented by a neural-symbolic method. The methodology therefore allows the development of a novel class of loss functions, which the experimental results show to converge faster than the approaches previously proposed in the literature.
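To illustrate the generator-to-loss correspondence in the supervised special case, the following is a minimal sketch (not the paper's implementation; function names are illustrative). It assumes the product t-norm, whose additive generator is g(x) = -log(x), and shows that the generator-derived loss of a conjunction of supervised ground facts coincides with the cross-entropy (negative log-likelihood) of the supervisions.

```python
import numpy as np

# Additive generator of the product t-norm: g(x) = -log(x),
# strictly decreasing with g(1) = 0. The t-norm is recovered as
# T(x, y) = g^{-1}(g(x) + g(y)) = exp(log x + log y) = x * y.
def g(x):
    return -np.log(x)

def generator_loss(truth_degrees):
    """Loss of a conjunction of ground facts under generator g:
    L = g(T(t_1, ..., t_n)) = sum_i g(t_i)."""
    return np.sum(g(truth_degrees))

# Supervised case: each fact states that the classifier assigns the
# target class to one example, with truth degree equal to the
# predicted probability of that class (toy values below).
probs_of_true_class = np.array([0.9, 0.7, 0.99])

# The generator-derived loss equals the cross-entropy of the
# supervisions, i.e. the summed negative log-likelihood.
assert np.isclose(generator_loss(probs_of_true_class),
                  -np.log(probs_of_true_class).sum())
```

Choosing a different generator (e.g. that of a Schweizer-Sklar family member) would, under the same recipe, yield a different loss function, which is the source of the novel loss class mentioned above.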