As machine learning models, specifically neural networks, become increasingly popular, there are concerns regarding their trustworthiness, especially in safety-critical applications, e.g., the actions of an autonomous vehicle must be safe. Existing approaches can train neural networks with such domain requirements enforced as constraints, but they either cannot guarantee that the constraints will be satisfied by all possible predictions (including on unseen data) or are limited in the types of constraints they can enforce. In this paper, we present an approach for training neural networks that can enforce a wide variety of constraints and guarantees that they are satisfied by all possible predictions. The approach builds on earlier work in which learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks.
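The abstract gives no pseudocode, so the following is only a minimal sketch of the two ingredients it names: propagating constraints (here, interval bounds) over the network layers, and a repair step that restores constraint satisfaction after learning. The toy ReLU network, the input box, the non-negativity constraint, and the bias-shift repair are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical one-hidden-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def interval_forward(lo, hi):
    """Propagate an input box [lo, hi] through the layers via interval arithmetic."""
    # Affine layer: split weights by sign to obtain sound output bounds.
    pos, neg = np.maximum(W1, 0), np.minimum(W1, 0)
    z_lo = pos @ lo + neg @ hi + b1
    z_hi = pos @ hi + neg @ lo + b1
    # ReLU is monotone, so it can be applied to both bounds directly.
    h_lo, h_hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
    pos2, neg2 = np.maximum(W2, 0), np.minimum(W2, 0)
    y_lo = pos2 @ h_lo + neg2 @ h_hi + b2
    y_hi = pos2 @ h_hi + neg2 @ h_lo + b2
    return y_lo, y_hi

# Assumed domain constraint for illustration: y >= 0 for all x in [-1, 1]^2.
lo, hi = -np.ones(2), np.ones(2)
y_lo, _ = interval_forward(lo, hi)
if y_lo[0] < 0:
    # Trivial repair in the spirit of a CSP-based weight update: shift the
    # output bias so the certified lower bound meets the constraint for
    # every input in the box (a real solver would adjust weights jointly).
    b2[0] += -y_lo[0]
assert interval_forward(lo, hi)[0][0] >= 0  # constraint now provably holds
```

In this sketch the propagated lower bound certifies the constraint over the whole input box rather than only on training points, which is what distinguishes a guarantee on all possible predictions from empirical satisfaction on seen data.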