Deep Neural Networks (DNNs) outperform alternative function approximators in many settings thanks to their modularity: they can be built by composing any desired differentiable operators. The resulting parametrized function is then tuned to the task at hand by simple gradient descent. This modularity comes at a cost: strictly enforcing constraints on DNNs, e.g. from a priori knowledge of the task or from desired physical properties, remains an open challenge. In this paper we propose the first provable affine constraint enforcement method for DNNs that requires only minimal changes to a given DNN's forward pass, is computationally friendly, and leaves the optimization of the DNN's parameters unconstrained, i.e. standard gradient-based methods can be employed. Our method does not require any sampling and provably ensures that the DNN fulfills the affine constraint on a given region of the input space at any point during training and testing. We coin this method POLICE, standing for Provably Optimal LInear Constraint Enforcement. Github: https://github.com/RandallBalestriero/POLICE
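To make the abstract's claim concrete, here is a minimal NumPy sketch of the core idea behind this style of constraint enforcement: a ReLU network is piecewise affine, so if every vertex of a convex region shares the same activation sign pattern, the network is exactly affine on that region. The sketch below adjusts a layer's biases so both vertices of a segment agree in sign, then checks affinity at the midpoint. All dimensions, variable names, and the bias-shifting rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU MLP (hypothetical sizes, for illustration only).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x, b1_adj):
    h = np.maximum(x @ W1.T + b1_adj, 0.0)  # ReLU layer with adjusted biases
    return h @ W2.T + b2

# Vertices of the constrained region R (here: a segment between two points).
V = np.array([[0.0, 0.0], [1.0, 1.0]])

# Sketch of the idea: shift each unit's bias so that every vertex of R lies
# on the same side of that unit's ReLU kink. Pre-activations are affine in x,
# so sign agreement at the vertices fixes the sign on the whole segment,
# making the network affine on R.
pre = V @ W1.T + b1                      # pre-activations at the vertices
agree = np.sign(pre.max(0)) == np.sign(pre.min(0))
b1_adj = b1.copy()
for j in np.where(~agree)[0]:
    # push the disagreeing unit toward the majority sign across vertices
    if (pre[:, j] > 0).sum() >= len(V) / 2:
        b1_adj[j] += -pre[:, j].min() + 1e-6   # force all vertices positive
    else:
        b1_adj[j] += -pre[:, j].max() - 1e-6   # force all vertices negative

# Affinity check on R: an affine map satisfies f(midpoint) = mean of f(vertices).
mid = V.mean(0)
lhs = forward(mid, b1_adj)
rhs = 0.5 * (forward(V[0], b1_adj) + forward(V[1], b1_adj))
print(np.allclose(lhs, rhs))  # True: the adjusted network is affine on the segment
```

The actual method additionally handles the optimization so that the enforcement holds throughout training without constraining the parameter updates; see the linked repository for the authors' implementation.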