Neural networks are highly successful in practice despite strong theoretical hardness results. The existing hardness results focus on the network architecture and assume that the network's weights are arbitrary. A natural approach to settle the discrepancy is to assume that the network's weights are "well-behaved" and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some "random-like" properties with respect to some "natural" distributions. We prove negative results in this regard, and show that for depth-$2$ networks, and many "natural" weight distributions such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights and every input distribution. This implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
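To make the setting concrete, the sketch below (a minimal NumPy illustration; the ReLU activation, function names, and dimensions are assumptions for exposition, not taken from the paper) draws the weights of a depth-$2$ (one-hidden-layer) network i.i.d. from a normal or uniform distribution, the kind of "random-like" weights the hardness result covers.

```python
import numpy as np

def sample_depth2_network(d, k, dist="normal", rng=None):
    """Draw weights of a depth-2 ReLU network with input dimension d and
    k hidden units, i.i.d. from a "natural" distribution (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    if dist == "normal":
        W = rng.standard_normal((k, d))    # hidden-layer weights
        v = rng.standard_normal(k)         # output-layer weights
    elif dist == "uniform":
        W = rng.uniform(-1.0, 1.0, (k, d))
        v = rng.uniform(-1.0, 1.0, k)
    else:
        raise ValueError("unknown distribution")
    return W, v

def forward(W, v, x):
    """Compute the network output v^T ReLU(W x)."""
    return v @ np.maximum(W @ x, 0.0)

# Example: evaluate one random depth-2 network on a random input.
W, v = sample_depth2_network(d=100, k=50, dist="normal")
x = np.random.default_rng(0).standard_normal(100)
print(forward(W, v, x))
```

The hardness statement concerns learning targets of this form: even when the weights are sampled this way, no efficient algorithm provably succeeds for most such draws under every input distribution.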