It is well known that neural networks with rectified linear unit (ReLU) activation functions are positively scale-invariant. Conventional algorithms such as stochastic gradient descent optimize neural networks in the vector space of weights, which, however, is not positively scale-invariant. This mismatch may cause problems during the optimization process. A natural question is then: \emph{can we construct a new vector space that is positively scale-invariant and sufficient to represent ReLU neural networks, so as to better facilitate the optimization process}? In this paper, we provide a positive answer to this question. First, we conduct a formal study of the positive scaling operators, which form a transformation group, denoted by $\mathcal{G}$. We show that the value of a path (i.e., the product of the weights along the path) in the neural network is invariant to positive scaling, and prove that the vector of the values of all paths is sufficient to represent the neural network under mild conditions. Second, we show that one can identify a set of basis paths among all paths, and prove that the linear span of their value vectors (denoted as $\mathcal{G}$-space) is an invariant space of lower dimension under the positive scaling group. Finally, we design a stochastic gradient descent algorithm in $\mathcal{G}$-space (abbreviated as $\mathcal{G}$-SGD) that optimizes the values of the basis paths with little extra cost by leveraging back-propagation. Our experiments show that $\mathcal{G}$-SGD significantly outperforms the conventional SGD algorithm in optimizing ReLU networks on benchmark datasets.
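To make the notion of positive scale invariance concrete, here is a minimal single-hidden-unit illustration (not taken from the paper, included only to fix ideas): for a hidden ReLU unit with incoming weight $w_1$, outgoing weight $w_2$, and any scaling factor $c>0$, the positive homogeneity of $\sigma(z)=\max(z,0)$ gives
\[
w_2\,\sigma(w_1 x) \;=\; \frac{w_2}{c}\,\sigma\!\left(c\,w_1 x\right),
\]
so the rescaled weights $(c\,w_1,\; w_2/c)$ realize exactly the same function, while the path value $w_1 w_2 = (c\,w_1)\cdot(w_2/c)$ is unchanged. This invariance of path values under positive scaling is what the value vector of paths, and hence $\mathcal{G}$-space, builds on.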