Deep neural networks are becoming increasingly popular for approximating arbitrary functions from noisy data. However, wider adoption is hindered by the need to explain such models and to impose additional constraints on them. The monotonicity constraint is one of the most requested properties in real-world scenarios and is the focus of this paper. One of the oldest ways to construct a monotonic fully connected neural network is to constrain its weights to be non-negative while employing a monotonic activation function. Unfortunately, this construction does not work with popular non-saturating activation functions such as ReLU, ELU, and SELU, as it can only approximate convex functions. We show that this shortcoming can be fixed by employing the original activation function for one part of the neurons in a layer and its point reflection for the other part. Our experiments show that this approach to building monotonic deep neural networks achieves matching or better accuracy than other state-of-the-art methods such as deep lattice networks or monotonic networks obtained by heuristic regularization. The method is also the simplest in the sense of having the fewest parameters, and it requires no modifications to the learning procedure and no post-learning steps.
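The construction described above can be illustrated with a minimal NumPy sketch. This is an assumed illustration of the idea, not the authors' exact implementation: weights are made non-negative (here via `abs`), half of the units apply ReLU, and the other half apply its point reflection about the origin, `-relu(-x)`, so the layer can represent both convex and concave monotonic pieces.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def point_reflected_relu(x):
    # Point reflection of ReLU about the origin: -relu(-x) = min(x, 0).
    return -relu(-x)

def monotonic_layer(x, W, b, convex_frac=0.5):
    """One hidden layer of a monotonic MLP (a sketch of the abstract's idea).

    Non-negative weights (via abs) preserve monotonicity in every input;
    a fraction `convex_frac` of the units use ReLU (convex), the rest use
    its point reflection (concave), which lets a deep stack approximate
    non-convex monotonic functions.
    """
    z = x @ np.abs(W) + b
    n_convex = int(convex_frac * z.shape[-1])
    return np.concatenate(
        [relu(z[..., :n_convex]), point_reflected_relu(z[..., n_convex:])],
        axis=-1,
    )

# Numerical monotonicity check: increasing every input coordinate
# never decreases any output of the layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = rng.normal(size=8)
x = rng.normal(size=(100, 3))
assert np.all(monotonic_layer(x + 0.1, W, b) >= monotonic_layer(x, W, b))
```

Because `abs(W)` is elementwise non-negative and both activations are non-decreasing, each layer is monotonic in its inputs, and composing such layers keeps the whole network monotonic.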