Differentiable logics (DLs) have recently been proposed as a method of training neural networks to satisfy logical specifications. A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions. These loss functions can then be used during training with standard gradient descent algorithms. The variety of existing DLs and the differing levels of formality with which they are treated make a systematic comparative study of their properties and implementations difficult. This paper remedies this problem by suggesting a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL. Syntactically, it generalises the syntax of existing DLs to first-order logic (FOL) and, for the first time, introduces a formalism for reasoning about vectors and learners. Semantically, it introduces a general interpretation function that can be instantiated to define the loss functions arising from different existing DLs. We use LDL to establish several theoretical properties of existing DLs, and to conduct an empirical study of them in neural network verification.
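To illustrate the core idea of a DL (a sketch of one possible instantiation, not the paper's own LDL definitions), the snippet below shows a DL2-style interpretation in which atomic comparisons and logical connectives are mapped to non-negative losses that vanish exactly when the formula is satisfied; the network `f`, the bounds `lo`/`hi`, and the toy output-bound specification are all illustrative assumptions.

```python
import torch

# Sketch of a DL2-style interpretation: each constraint becomes a
# non-negative loss that is zero iff the constraint holds.

def leq(a, b):
    # loss for the atomic constraint a <= b
    return torch.relu(a - b)

def land(l1, l2):
    # conjunction: both losses must vanish
    return l1 + l2

def lor(l1, l2):
    # disjunction: it suffices that one loss vanishes
    return l1 * l2

# Toy example: train a network f so its output stays within [lo, hi].
f = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(f.parameters(), lr=0.1)
lo, hi = torch.tensor(0.0), torch.tensor(1.0)

for _ in range(100):
    x = torch.rand(32, 4)          # sampled inputs
    y = f(x)
    # specification: lo <= f(x)  AND  f(x) <= hi, interpreted as a loss
    loss = land(leq(lo, y), leq(y, hi)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Different DLs correspond to different choices of these translation rules (for example, fuzzy-logic t-norms instead of the sum/product connectives above); LDL's general interpretation function is what lets such choices be stated and compared in one framework.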