In this paper, we present a novel method to enforce invexity on Neural Networks (NNs). Invex functions ensure that every stationary point is a global minimum; hence, gradient descent initiated from any point converges to the global minimum. Another advantage of an invex NN is that simply thresholding its output divides the data space locally into two connected sets separated by a highly non-linear decision boundary. To this end, we formulate a universal invex function approximator and employ it to enforce invexity in NNs; we call the result Input Invex Neural Networks (II-NN). We first fit the data with a known invex function, then modify it with a NN, compare the direction of the NN's gradient with that of the reference invex function, and penalize the NN's gradient direction wherever it contradicts the reference. To penalize the gradient direction we apply Gradient Clipped Gradient Penalty (GC-GP). We apply our method to existing NN architectures for both image classification and regression tasks. Extensive empirical and qualitative experiments show that our method achieves performance similar to that of ordinary NNs while guaranteeing invexity, and that it outperforms linear NNs and Input Convex Neural Networks (ICNN) by a large margin. We publish our code and implementation details on GitHub.
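
The following is a minimal sketch, not the paper's implementation, of the gradient-direction penalty described above. It assumes PyTorch, a simple quadratic f_ref(x) = ||x - c||^2 as the known invex reference function, and a hypothetical scalar-output model; the name invexity_penalty and the clip_value parameter are illustrative only.

import torch
import torch.nn as nn

# Hypothetical scalar-output network and reference minimum.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
center = torch.zeros(2)  # assumed minimum of the reference invex function

def invexity_penalty(x, clip_value=1.0):
    """Penalize points where the NN's gradient direction contradicts
    the gradient direction of the reference invex function."""
    x = x.clone().requires_grad_(True)
    y = model(x).sum()
    # Gradient of the NN output w.r.t. the input.
    g_nn = torch.autograd.grad(y, x, create_graph=True)[0]
    # Gradient of the quadratic reference f_ref(x) = ||x - c||^2.
    g_ref = 2.0 * (x.detach() - center)
    # Agreement between the two gradient directions.
    cos = nn.functional.cosine_similarity(g_nn, g_ref, dim=-1)
    # Penalize only contradicting directions (cos < 0); clipping the penalty
    # loosely mirrors the "gradient clipped" part of GC-GP as a guess.
    return torch.clamp(-cos, min=0.0, max=clip_value).mean()

x_batch = torch.randn(8, 2)
loss = invexity_penalty(x_batch)
loss.backward()  # penalty gradients flow into the model parameters

In practice such a penalty would be added to the task loss, so the network is trained to fit the data while keeping its gradient field aligned with an invex reference.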