Recent numerical experiments have demonstrated that the choice of optimization geometry used during training can impact generalization performance when learning expressive nonlinear model classes such as deep neural networks. These observations have important implications for modern deep learning but remain poorly understood due to the difficulty of the associated nonconvex optimization problem. Towards an understanding of this phenomenon, we analyze a family of pseudogradient methods for learning generalized linear models under the square loss, a simplified problem that contains both nonlinearity in the model parameters and nonconvexity in the optimization, and that admits a single neuron as a special case. We prove non-asymptotic bounds on the generalization error that sharply characterize how the interplay between the optimization geometry and the feature space geometry determines the out-of-sample performance of the learned model. Experimentally, selecting the optimization geometry as suggested by our theory leads to improved performance in generalized linear model estimation problems such as nonlinear and nonconvex variants of sparse vector recovery and low-rank matrix sensing.
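To make the setting concrete, one illustrative instantiation (in our own notation; the abstract does not fix the precise update rule) is as follows. Write the generalized linear model as $y \approx \sigma(\langle w, x \rangle)$ for a known monotone link $\sigma$, with square loss $L(w) = \frac{1}{2n}\sum_{i=1}^{n}\bigl(\sigma(\langle w, x_i\rangle) - y_i\bigr)^2$. A pseudogradient method in the geometry induced by a strictly convex potential $\psi$ then takes mirror-descent-style steps that drop the derivative of $\sigma$ from the chain rule:
\[
g_t \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(\sigma(\langle w_t, x_i\rangle) - y_i\bigr)\,x_i,
\qquad
w_{t+1} \;=\; \arg\min_{w}\;\Bigl\{\eta\,\langle g_t, w\rangle + D_{\psi}(w, w_t)\Bigr\},
\]
where $D_{\psi}$ is the Bregman divergence of $\psi$ and $\eta > 0$ is a step size. Taking $\psi(w) = \tfrac{1}{2}\|w\|_2^2$ recovers the standard Euclidean pseudogradient step, while non-Euclidean choices of $\psi$ (for example, potentials adapted to sparse or low-rank structure) change the optimization geometry and, per the bounds described above, the resulting generalization error.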