Deep Neural Networks (DNNs) outshine alternative function approximators in many settings thanks to their modularity: any desired differentiable operator can be composed into the architecture. The resulting parametrized functional is then tuned to solve the task at hand by simple gradient descent. This modularity comes at a cost: strictly enforcing constraints on DNNs, e.g., from a priori knowledge of the task or from desired physical properties, remains an open challenge. In this paper we propose the first provable affine constraint enforcement method for DNNs that requires minimal changes to a given DNN's forward pass, that is computationally friendly, and that leaves the optimization of the DNN's parameters unconstrained, i.e., standard gradient-based methods can be employed. Our method does not require any sampling and provably ensures that the DNN fulfills the affine constraint on a given region of the input space at any point during training and testing. We coin this method POLICE, standing for Provably Optimal LInear Constraint Enforcement.
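The abstract states the guarantees but not the construction. As a hedged illustration only: for ReLU networks, one way to obtain such a guarantee is to shift each layer's bias so that every vertex of the constrained convex region lands on the same side of every ReLU hyperplane, which makes the layer behave as a fixed affine map on that region. The helper name `enforce_affine_region` and the exact mechanism below are assumptions for illustration, not the paper's verbatim algorithm.

```python
# Hedged sketch (an assumption, not the paper's verbatim algorithm): make a
# ReLU layer affine on a convex region by shifting its bias so that all of
# the region's vertices share a single pre-activation sign pattern.
import torch

def enforce_affine_region(weight, bias, vertices):
    """Return a shifted bias such that relu(x @ weight.T + new_bias) is
    affine on the convex hull of `vertices` (num_vertices x in_dim)."""
    pre = vertices @ weight.T + bias                   # (V, out) pre-activations
    on_fraction = (pre > 0).float().mean(dim=0)        # per-unit vote of the vertices
    sign = torch.where(on_fraction >= 0.5,
                       torch.ones_like(on_fraction),
                       -torch.ones_like(on_fraction))  # majority side per unit
    # Smallest shift putting every vertex on the majority side of each unit,
    # i.e., enforcing sign * (pre + shift) >= 0 for all vertices.
    shift = sign * torch.relu(-sign * pre).max(dim=0).values
    return bias + shift

# Illustrative usage on a toy layer and a triangular region of the input space.
W, b = torch.randn(4, 2), torch.randn(4)
region = torch.tensor([[0., 0.], [1., 0.], [0., 1.]])  # triangle vertices
b_adjusted = enforce_affine_region(W, b, region)
```

Because each half-space is convex, forcing all vertices onto one side of a unit's hyperplane forces the entire convex hull onto that side; applied layer by layer at every forward pass, the network is then exactly affine on the region, the affine constraint can be imposed in closed form on the final layer, and training proceeds with standard unconstrained gradient-based optimization, consistent with the properties claimed above.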