We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay-penalized training problem as a set of group-$\ell_1$-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-$\ell_1$ regularization path, and scale convex optimization for neural networks to image classification on MNIST and CIFAR-10.
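For concreteness, here is a hedged sketch of the reformulation in the form standard to this line of work; the notation (data $X \in \mathbb{R}^{n \times d}$, labels $y \in \mathbb{R}^n$, and diagonal $0/1$ activation-pattern matrices $D_1, \dots, D_P$) is an assumption of ours, not necessarily the paper's exact statement:
$$
\min_{\{v_i, w_i\}} \ \frac{1}{2}\Big\|\sum_{i=1}^{P} D_i X (v_i - w_i) - y\Big\|_2^2 + \lambda \sum_{i=1}^{P}\big(\|v_i\|_2 + \|w_i\|_2\big)
\quad \text{s.t.} \quad (2D_i - I)Xv_i \ge 0,\ (2D_i - I)Xw_i \ge 0,
$$
where the polyhedral cone constraints enforce that each weight block is "local" to the data activating its pattern. Dropping the cone constraints and merging each pair into a single block $u_i$ yields the unconstrained convex gated ReLU problem,
$$
\min_{\{u_i\}} \ \frac{1}{2}\Big\|\sum_{i=1}^{P} D_i X u_i - y\Big\|_2^2 + \lambda \sum_{i=1}^{P}\|u_i\|_2.
$$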
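The following is a minimal NumPy sketch of an accelerated proximal gradient (FISTA-style) loop for the unconstrained gated ReLU problem above. It is an illustration under the assumed notation, not the paper's released solver; the helper names (`group_soft_threshold`, `fista_gated_relu`) and the representation of the gates as a `(P, n)` 0/1 array `D` are hypothetical.

```python
import numpy as np

def group_soft_threshold(u, tau):
    """Prox of tau * ||u||_2: shrink the whole block toward zero."""
    norm = np.linalg.norm(u)
    if norm <= tau:
        return np.zeros_like(u)
    return (1.0 - tau / norm) * u

def fista_gated_relu(X, y, D, lam, step, iters=500):
    """Accelerated proximal gradient on the gated ReLU objective
        0.5 * || sum_i diag(D[i]) @ X @ U[i] - y ||_2^2 + lam * sum_i ||U[i]||_2.
    D is a (P, n) 0/1 array holding the diagonals of the gate matrices D_i;
    step should be at most 1/L, with L the Lipschitz constant of the smooth part.
    """
    P, _ = D.shape
    d = X.shape[1]
    U = np.zeros((P, d))   # one weight block per activation pattern
    V = U.copy()           # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        # Residual of the model at the extrapolated point.
        resid = sum(D[i] * (X @ V[i]) for i in range(P)) - y
        # Gradient step on each block, then group soft-thresholding.
        U_new = np.stack([
            group_soft_threshold(V[i] - step * (X.T @ (D[i] * resid)), step * lam)
            for i in range(P)
        ])
        # Nesterov momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        V = U_new + ((t - 1.0) / t_new) * (U_new - U)
        U, t = U_new, t_new
    return U

# Usage sketch: gates sampled as sign patterns of random hyperplanes.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = rng.standard_normal(100)
D = (X @ rng.standard_normal((10, 20)) > 0).T.astype(float)  # (P=20, n=100)
U = fista_gated_relu(X, y, D, lam=0.1, step=1e-3, iters=200)
```

The group soft-thresholding step is what drives entire blocks $u_i$ exactly to zero along the group-$\ell_1$ regularization path; handling the cone-constrained ReLU formulation additionally requires projections or an augmented Lagrangian treatment of the constraints, which this sketch omits.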