Despite the vast empirical success of neural networks, theoretical understanding of their training procedures remains limited, especially in providing guarantees on test performance, due to the non-convex nature of the underlying optimization problem. The current paper investigates an alternative approach to neural network training: reducing it to a problem with convex structure, namely solving a monotone variational inequality (MVI), inspired by recent work of Juditsky & Nemirovski (2019). The solution to an MVI can be found by computationally efficient procedures, and, importantly, this yields performance guarantees in the form of $\ell_2$ and $\ell_{\infty}$ bounds on model recovery and prediction accuracy in the theoretical setting of training a single-layer linear neural network. In addition, we study the use of MVIs for training multi-layer neural networks and propose a practical algorithm called \textit{stochastic variational inequality} (SVI), demonstrating its applicability to training fully-connected neural networks and graph neural networks (GNNs); SVI is completely general and can be used to train other types of neural networks. We demonstrate the competitive or better performance of SVI compared with widely used stochastic gradient descent methods on both synthetic and real network data prediction tasks across various performance metrics, most notably its improved efficiency in the early stage of training.
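To make the reduction concrete, the following is a minimal sketch of an MVI formulation for a single-layer model; the generalized-linear observation model and the plain stochastic-approximation iteration below are illustrative assumptions, not the exact setting or algorithm of the paper. Suppose $\mathbb{E}[y \mid \mathbf{x}] = \sigma(\mathbf{x}^\top \theta^*)$ for a nondecreasing activation $\sigma$, and define the operator
\[
F(\theta) \;=\; \mathbb{E}\!\left[\mathbf{x}\,\bigl(\sigma(\mathbf{x}^\top \theta) - y\bigr)\right],
\]
which satisfies $F(\theta^*) = 0$ and is monotone, since its Jacobian $\mathbb{E}\!\left[\sigma'(\mathbf{x}^\top \theta)\,\mathbf{x}\mathbf{x}^\top\right]$ is positive semidefinite. Training then amounts to the MVI of finding $\theta^* \in \Theta$ such that $\langle F(\theta), \theta - \theta^* \rangle \ge 0$ for all $\theta \in \Theta$, which a stochastic-approximation scheme can solve by iterating $\theta_{t+1} = \Pi_\Theta\bigl(\theta_t - \gamma_t \hat{F}(\theta_t)\bigr)$, where $\hat{F}$ is a minibatch estimate of $F$ and $\Pi_\Theta$ denotes projection onto $\Theta$.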