Neural network training is usually accomplished by solving a non-convex optimization problem using stochastic gradient descent. Although one optimizes over the network's parameters, the main loss function generally only depends on the realization of the neural network, i.e. the function it computes. Studying the optimization problem over the space of realizations opens up new ways to understand neural network training. In particular, common loss functions like mean squared error and categorical cross entropy are convex on spaces of neural network realizations, which themselves are non-convex. Approximation capabilities of neural networks can be used to deal with the latter non-convexity, which allows us to establish that, for sufficiently large networks, local minima of a regularized optimization problem on the realization space are almost optimal. Note, however, that each realization has many different, possibly degenerate, parametrizations. In particular, a local minimum in the parametrization space need not correspond to a local minimum in the realization space. To establish such a connection, inverse stability of the realization map is required, meaning that proximity of realizations must imply proximity of corresponding parametrizations. We present pathologies which prevent inverse stability in general and, for shallow networks, proceed to establish a restricted space of parametrizations on which we have inverse stability w.r.t. a Sobolev norm. Furthermore, we show that by optimizing over such restricted sets, it is still possible to learn any function which can be learned by optimization over unrestricted sets.