The training of two-layer neural networks with nonlinear activation functions is an important non-convex optimization problem with numerous applications and promising performance in layerwise deep learning. In this paper, we develop exact convex optimization formulations, based on semidefinite programming, for two-layer neural networks with second-degree polynomial activations. Remarkably, we show that the semidefinite lifting is always exact, and therefore the computational complexity of global optimization is polynomial in the input dimension and sample size for all input data. The developed convex formulations are proven to achieve the same global optimal solution set as their non-convex counterparts. More specifically, the globally optimal two-layer neural network with polynomial activations can be found by solving a semidefinite program (SDP) and decomposing the solution using a procedure we call Neural Decomposition. Moreover, the choice of regularizer plays a crucial role in the computational tractability of neural network training. We show that the standard weight decay regularization formulation is NP-hard, whereas other simple convex penalties render the problem tractable in polynomial time via convex programming. We extend the results beyond the fully connected architecture to other neural network architectures, including networks with vector outputs and convolutional architectures with pooling. We provide extensive numerical simulations showing that the standard backpropagation approach often fails to achieve the global optimum of the training loss. The proposed approach achieves better test accuracy significantly faster than the standard backpropagation procedure.
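To make the lifting idea concrete, the following is a minimal illustrative sketch, not the paper's exact formulation: for the special case of a purely quadratic activation σ(t) = t² with nonnegative second-layer weights, the network output f(x) = Σⱼ aⱼ(wⱼᵀx)² equals xᵀZx with Z = Σⱼ aⱼwⱼwⱼᵀ ⪰ 0, so training becomes a convex problem over the PSD matrix Z. The sketch below solves this convex program with simple projected gradient descent (avoiding an SDP solver dependency) and then recovers neurons from an eigendecomposition of Z, in the spirit of the Neural Decomposition step. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 200
X = rng.standard_normal((n, d))

# Toy ground-truth two-layer net with quadratic activation sigma(t) = t^2
# and nonnegative second-layer weights: f(x) = sum_j a_j (w_j^T x)^2.
W = rng.standard_normal((2, d))
y = ((X @ W.T) ** 2) @ np.array([0.7, 0.3])

# Convex lift: f(x) = x^T Z x with Z = sum_j a_j w_j w_j^T, a PSD matrix.
# Minimize the convex objective (1/n) * sum_i (x_i^T Z x_i - y_i)^2 over Z >= 0
# by projected gradient descent (illustrative stand-in for an SDP solver).
Z = np.zeros((d, d))
lr = 0.01
for _ in range(3000):
    r = np.einsum('ij,jk,ik->i', X, Z, X) - y      # residuals x_i^T Z x_i - y_i
    grad = X.T @ (r[:, None] * X) * 2 / n          # (2/n) sum_i r_i x_i x_i^T
    Z -= lr * grad
    # project onto the PSD cone by clipping negative eigenvalues
    lam, U = np.linalg.eigh((Z + Z.T) / 2)
    Z = (U * np.clip(lam, 0.0, None)) @ U.T

# Decomposition step: eigenvectors of the solution Z act as recovered neurons,
# eigenvalues as the (nonnegative) second-layer weights.
lam, U = np.linalg.eigh(Z)
neurons = U[:, lam > 1e-2]
```

Since the lifted objective is convex in Z, any first-order method reaches the global optimum; the rank of the recovered Z reveals how many neurons the optimal network needs.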