Depth separation results propose a possible theoretical explanation for the benefits of deep neural networks over shallower architectures, establishing that the former possess superior approximation capabilities. However, there are no known results in which the deeper architecture leverages this advantage into a provable optimization guarantee. We prove that when the data are generated by a distribution with radial symmetry which satisfies some mild assumptions, gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations, where the hidden layer is held fixed throughout training. Since it is known that ball indicators are hard to approximate with respect to a certain heavy-tailed distribution when using depth 2 networks with a single layer of non-linearities (Safran and Shamir, 2017), this establishes what is, to the best of our knowledge, the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice. Our proof technique relies on a random features approach which reduces the problem to learning with a single neuron, where new tools are required to show the convergence of gradient descent when the distribution of the data is heavy-tailed.
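To make the training setup concrete, the following is a minimal sketch of the architecture described above: a depth 2 network with sigmoidal activations in both layers, where the hidden layer is frozen at random initialization (the random features viewpoint) and plain gradient descent trains only the outer layer against a ball indicator target. All hyperparameters (dimension, width, step size, radius), the squared loss, and the Gaussian input distribution are illustrative placeholders of my own choosing; the paper's actual guarantees concern a heavy-tailed radially symmetric distribution and specific parameter regimes.

```python
import numpy as np

# Sketch of the setup from the abstract: a depth 2 network with two layers
# of sigmoidal activations, hidden layer held fixed, gradient descent on
# the outer layer only. Hyperparameters are illustrative, not the paper's.

rng = np.random.default_rng(0)
d, width, n, steps, lr = 10, 512, 2000, 500, 0.5
radius = np.sqrt(d)  # ball radius chosen so labels are roughly balanced

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Radially symmetric data; a Gaussian is used purely for illustration
# (the hardness result of Safran and Shamir concerns a heavy-tailed
# radial distribution, not the Gaussian).
X = rng.standard_normal((n, d))
y = (np.linalg.norm(X, axis=1) <= radius).astype(float)  # ball indicator

# Frozen hidden layer: random features passed through the first sigmoid.
W = rng.standard_normal((d, width)) / np.sqrt(d)
b = rng.standard_normal(width)
H = sigmoid(X @ W + b)  # fixed representation, never updated

# Trainable outer layer, followed by the second sigmoidal activation.
u = np.zeros(width)
c = 0.0
for _ in range(steps):
    p = sigmoid(H @ u + c)      # network output in [0, 1]
    g = (p - y) * p * (1 - p)   # gradient of squared loss through sigmoid
    u -= lr * (H.T @ g) / n     # plain gradient descent on outer weights
    c -= lr * g.mean()

print("train 0-1 error:", np.mean((sigmoid(H @ u + c) > 0.5) != (y > 0.5)))
```

Because only `u` and `c` move during training, the optimization is effectively over a single neuron applied to a fixed random representation, which mirrors the reduction the abstract mentions.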