The non-convexity of the artificial neural network (ANN) training landscape brings inherent optimization difficulties. While the traditional back-propagation stochastic gradient descent (SGD) algorithm and its variants are effective in certain cases, they can become stuck at spurious local minima and are sensitive to initialization and hyperparameters. Recent work has shown that the training of an ANN with ReLU activations can be reformulated as a convex program, raising the hope of globally optimizing interpretable ANNs. However, naively solving the convex training formulation has exponential complexity, and even an approximation heuristic requires cubic time. In this work, we characterize the quality of this approximation and develop two efficient algorithms that train ANNs with global convergence guarantees. The first algorithm is based on the alternating direction method of multipliers (ADMM). It solves both the exact convex formulation and the approximate counterpart, achieves linear global convergence, and its first few iterations often yield a solution with high prediction accuracy. When solving the approximate formulation, its per-iteration time complexity is quadratic. The second algorithm, based on the theory of "sampled convex programs", is simpler to implement. It solves unconstrained convex formulations and converges to an approximately globally optimal classifier. The non-convexity of the ANN training landscape is exacerbated when adversarial training is considered. We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs robust to adversarial inputs. Our analysis explicitly focuses on one-hidden-layer fully connected ANNs but can be extended to more sophisticated architectures.
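For concreteness, here is a sketch of the kind of convex reformulation referenced above, in the one-hidden-layer ReLU setting with data $X \in \mathbb{R}^{n \times d}$, labels $y \in \mathbb{R}^{n}$, and regularization weight $\beta$ (our illustrative notation; the abstract itself defines no symbols). Enumerating the finitely many ReLU activation patterns $D_i = \mathrm{diag}\left(\mathbb{1}\{X u \geq 0\}\right)$, $i = 1, \dots, P$, induced by vectors $u \in \mathbb{R}^{d}$, the training problem becomes the convex program

\[
\min_{\{v_i, w_i\}} \; \frac{1}{2} \Bigl\| \sum_{i=1}^{P} D_i X (v_i - w_i) - y \Bigr\|_2^2 + \beta \sum_{i=1}^{P} \bigl( \|v_i\|_2 + \|w_i\|_2 \bigr)
\quad \text{s.t.} \quad (2D_i - I) X v_i \geq 0, \;\; (2D_i - I) X w_i \geq 0.
\]

Since the number of patterns $P$ can grow exponentially in the problem dimensions, solving this program exactly is intractable at scale; the approximation heuristic mentioned above restricts the sum to a subset of the patterns.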
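Below is a minimal, illustrative sketch of the sampled, unconstrained style of convex formulation that the second algorithm builds on, assuming a squared loss and delegating the solve to the off-the-shelf cvxpy modeling library. The function name, the Gaussian pattern-sampling scheme, and all parameters are our assumptions for demonstration, not the paper's implementation.

```python
import numpy as np
import cvxpy as cp

def sampled_convex_relu_fit(X, y, P=20, beta=1e-3, seed=0):
    """Illustrative sketch (not the paper's code): fit a one-hidden-layer
    ReLU-style model by sampling P activation patterns
    D_i = diag(1[X u_i >= 0]) from random Gaussian directions u_i, then
    solving the unconstrained group-norm regularized least-squares convex
    program over those sampled patterns."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.standard_normal((d, P))            # random hyperplane directions
    D = (X @ U >= 0).astype(float)             # n x P matrix of 0/1 patterns
    V = cp.Variable((d, P))                    # one weight vector per pattern
    # Model output: sum_i D_i X v_i, computed column-wise.
    residual = cp.sum(cp.multiply(D, X @ V), axis=1) - y
    reg = cp.sum(cp.norm(V, 2, axis=0))        # group norm over each column
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(residual) + beta * reg))
    prob.solve()
    return U, V.value, prob.value

# Toy usage on synthetic data from a planted ReLU unit.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = np.maximum(X @ np.ones(5), 0.0)
U, V, obj = sampled_convex_relu_fit(X, y)
pred = np.sum((X @ U >= 0) * (X @ V), axis=1)
print(f"objective {obj:.4f}, train MSE {np.mean((pred - y) ** 2):.4f}")
```

The exact formulation additionally enforces the cone constraints $(2D_i - I) X v_i \geq 0$ from the program above; dropping them yields the simpler unconstrained variant sketched here.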