Why do networks have negative weights at all? The answer: to learn more functions. We mathematically prove that deep neural networks with all non-negative weights are not universal approximators. This fundamental result is assumed throughout much of the deep learning literature, yet it has not previously been proven, nor has the necessity of negative weights been demonstrated.
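The paper's proof is not reproduced here, but the intuition can be illustrated with a minimal NumPy sketch (the architecture, sizes, and names below are hypothetical, and it assumes a monotone activation such as ReLU): with all weights non-negative, every layer is a non-decreasing function of its input, so the whole network is monotone non-decreasing and cannot approximate a decreasing target like f(x) = -x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network with all non-negative weights.
W1 = rng.uniform(0, 1, size=(16, 1))   # non-negative first-layer weights
b1 = rng.uniform(-1, 1, size=16)       # biases may still be arbitrary
W2 = rng.uniform(0, 1, size=(1, 16))   # non-negative second-layer weights
b2 = rng.uniform(-1, 1, size=1)

def net(x):
    # ReLU is monotone non-decreasing, and W1, W2 have no negative entries,
    # so each layer (and hence the composition) is non-decreasing in x.
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

# Check monotonicity empirically on an increasing grid of inputs.
xs = np.linspace(0.0, 1.0, 100)
ys = np.array([net(np.array([x]))[0] for x in xs])
assert np.all(np.diff(ys) >= -1e-9), "output should be non-decreasing"

# A non-decreasing function cannot approximate the decreasing target
# f(x) = -x to arbitrary accuracy, so such networks are not universal.
```

This sketch only demonstrates the one-dimensional monotone case under the stated assumptions; the paper's general result covers deep networks and is established formally rather than empirically.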