The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally difficult in theory, many techniques have been proposed for solving it in practice. It has been observed in the literature that, by default, neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is therefore to train the given NN to satisfy said constraints prior to verification. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Training with constraints is usually implemented by specifying a translation from a given formal logic into loss functions, which are then used to train the neural network. Because these functions need to be differentiable for training purposes, such translations are called differentiable logics (DLs). This raises several research questions. What kinds of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL, viewed from the point of view of the resulting loss function? In this extended abstract we discuss and answer these questions.
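To make the idea of such a translation concrete, the following is a minimal sketch of one possible DL2-style mapping from constraints to loss terms, written with PyTorch; the function names (`loss_leq`, `loss_and`, `loss_or`, `robustness_loss`) and the robustness example are illustrative assumptions, not the specific DL studied here.

```python
# A minimal sketch of a DL2-style translation from constraints to loss terms.
# Assumes PyTorch tensors; names and the example constraint are illustrative.
import torch

def loss_leq(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Translate the constraint a <= b into a loss:
    zero when satisfied, the amount of violation otherwise."""
    return torch.relu(a - b)

def loss_and(*losses: torch.Tensor) -> torch.Tensor:
    """Conjunction: every conjunct must hold, so violations add up."""
    return sum(losses)

def loss_or(*losses: torch.Tensor) -> torch.Tensor:
    """Disjunction: one disjunct suffices, so take the smallest violation."""
    return torch.stack(losses).min(dim=0).values

def robustness_loss(f, x, delta, eps):
    """Example: penalise violations of |f(x + delta) - f(x)| <= eps,
    a local robustness constraint, during ordinary training."""
    diff = (f(x + delta) - f(x)).abs()
    return loss_leq(diff, torch.full_like(diff, eps)).mean()
```

A loss term of this kind is typically added, with some weight, to the standard training objective, so that gradient descent pushes the network towards satisfying the constraint before verification is attempted; different DLs differ precisely in how the logical connectives are mapped to such differentiable operations.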