We investigate 1) the rate at which refined properties of the empirical risk---in particular, gradients---converge to their population counterparts in standard non-convex learning tasks, and 2) the consequences of this convergence for optimization. Our analysis follows the tradition of norm-based capacity control. We propose vector-valued Rademacher complexities as a simple, composable, and user-friendly tool to derive dimension-free uniform convergence bounds for gradients in non-convex learning problems. As an application of our techniques, we give a new analysis of batch gradient descent methods for non-convex generalized linear models and non-convex robust regression, showing how to use any algorithm that finds approximate stationary points to obtain optimal sample complexity, even when the dimension is high or possibly infinite and multiple passes over the dataset are allowed. Moving to non-smooth models, we show---in contrast to the smooth case---that even for a single ReLU it is not possible to obtain dimension-independent convergence rates for gradients in the worst case. On the positive side, it is still possible to obtain dimension-independent rates under a new type of distributional assumption.
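As a rough illustration of the central quantity studied (the notation $\ell$, $\hat{L}_n$, $L$, $\mathcal{W}$, and $\mathcal{D}$ below is chosen for this sketch and is not quoted from the body of the paper): writing $\hat{L}_n(w) = \frac{1}{n}\sum_{i=1}^{n}\ell(w; z_i)$ for the empirical risk over samples $z_1,\dots,z_n \sim \mathcal{D}$ and $L(w) = \mathbb{E}_{z\sim\mathcal{D}}[\ell(w; z)]$ for the population risk, a uniform convergence bound for gradients controls
\[
  \sup_{w \in \mathcal{W}} \bigl\| \nabla \hat{L}_n(w) - \nabla L(w) \bigr\|,
\]
and a dimension-free bound of this type is one stated in terms of a vector-valued Rademacher complexity of the gradient class, e.g. $\mathbb{E}_{\epsilon} \sup_{w \in \mathcal{W}} \bigl\| \frac{1}{n}\sum_{i=1}^{n} \epsilon_i \nabla \ell(w; z_i) \bigr\|$ with i.i.d. Rademacher signs $\epsilon_i$, rather than in terms of the ambient dimension of $w$.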