Many supervised machine learning methods are naturally cast as optimization problems. For prediction models that are linear in their parameters, this often leads to convex problems, for which many mathematical guarantees exist. Models that are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems, for which guarantees are harder to obtain. In this review paper, we consider two-layer neural networks with homogeneous activation functions, in the limit where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
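As a minimal illustration of this setting (a standard mean-field parametrization; the precise scaling may differ from the one adopted later in the paper), such a two-layer network with $m$ hidden neurons predicts
\[
h(x) \;=\; \frac{1}{m} \sum_{j=1}^{m} a_j \, \sigma(w_j^\top x),
\]
where the activation $\sigma$ is positively homogeneous, e.g., the ReLU $\sigma(u) = \max(u, 0)$, which satisfies $\sigma(\lambda u) = \lambda \, \sigma(u)$ for all $\lambda \ge 0$; the convergence guarantees in question concern the limit $m \to \infty$.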