We consider the problem of training a deep neural network with nonsmooth regularization to retrieve a sparse and efficient sub-structure. Our regularizer is only assumed to be lower semi-continuous and prox-bounded. We combine an adaptive quadratic regularization approach with proximal stochastic gradient principles to derive a new solver, called SR2, whose convergence and worst-case complexity are established without knowledge or approximation of the gradient's Lipschitz constant. We formulate a stopping criterion that ensures an appropriate first-order stationarity measure converges to zero under certain conditions. We establish a worst-case iteration complexity of $\mathcal{O}(\epsilon^{-2})$ that matches that of related methods such as ProxGEN, where the learning rate is assumed to be related to the Lipschitz constant. Our experiments on network instances trained on CIFAR-10 and CIFAR-100 with $\ell_1$ and $\ell_0$ regularizations show that SR2 consistently achieves higher sparsity and accuracy than related methods such as ProxGEN and ProxSGD.
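To make the high-level description above concrete, the sketch below illustrates one adaptive proximal quadratic-regularization step of the kind the abstract describes, specialized to $\ell_1$ regularization. It is a minimal illustration under our own assumptions, not the authors' SR2 implementation: the names `sr2_like_step`, `loss_fn`, `eta1`, and `gamma` are hypothetical, and in a stochastic setting the loss and gradient would be evaluated on a mini-batch.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sr2_like_step(w, grad, loss_fn, lam, sigma, eta1=0.1, gamma=2.0):
    """One hedged sketch of an adaptive proximal quadratic-regularization step.

    Minimizes the model  g^T s + (sigma/2)||s||^2 + lam*||w + s||_1  in s,
    then accepts or rejects the trial point from the ratio of actual to
    predicted decrease and updates sigma accordingly, so no Lipschitz
    constant is needed.  All parameter names here are illustrative.
    """
    # Closed-form model minimizer: prox of (lam/sigma)*||.||_1 at w - grad/sigma.
    w_trial = soft_threshold(w - grad / sigma, lam / sigma)
    s = w_trial - w

    # Predicted decrease of the regularized model (nonnegative at the minimizer).
    pred = -(grad @ s + 0.5 * sigma * (s @ s)
             + lam * (np.abs(w_trial).sum() - np.abs(w).sum()))
    # Actual decrease of the regularized objective (mini-batch estimate in practice).
    actual = (loss_fn(w) + lam * np.abs(w).sum()
              - loss_fn(w_trial) - lam * np.abs(w_trial).sum())

    if pred > 0 and actual / pred >= eta1:
        return w_trial, sigma / gamma   # successful step: accept, relax sigma
    return w, sigma * gamma             # unsuccessful step: reject, tighten sigma
```

The accept/reject test plays the role of the learning rate: instead of assuming knowledge of the gradient's Lipschitz constant, the quadratic-regularization parameter `sigma` is increased after unsuccessful steps and decreased after successful ones.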