Why do brains have inhibitory connections? Why do deep networks have negative weights? We believe representing functions is the primary role of both (i) the brain in natural intelligence, and (ii) deep networks in artificial intelligence. Our answer to why there are inhibitory/negative weights is: to learn more functions. We prove that, in the absence of negative weights, neural networks with non-decreasing activation functions are not universal approximators. While this result may be intuitive to some, to the best of our knowledge there is no formal theory, in either machine learning or neuroscience, that demonstrates why negative weights are crucial for representation capacity. Further, we provide insights into the geometric properties of the function space that non-negative deep networks cannot represent. We expect these insights will yield a deeper understanding of more sophisticated inductive priors imposed on the distribution of weights, leading to more efficient biological and machine learning.
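The core of the non-universality claim is that composing non-negative linear maps with non-decreasing activations yields a function that is monotone non-decreasing in every input coordinate, so non-monotone targets such as XOR lie outside the representable class. Below is a minimal numerical sketch of that reasoning, not the paper's formal proof; the function name `nonneg_mlp` and the specific architecture are illustrative choices, not taken from the source.

```python
import numpy as np

# Sketch: a feedforward network whose weights are all non-negative and whose
# activation (ReLU) is non-decreasing is coordinate-wise monotone in its
# inputs. This is an empirical illustration of the abstract's claim, not a
# proof; all names and dimensions here are assumptions for the example.

rng = np.random.default_rng(0)

def nonneg_mlp(x, weights, biases):
    """Forward pass with ReLU; every entry of every matrix in `weights` is >= 0."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)  # ReLU is non-decreasing
    return weights[-1] @ h + biases[-1]

# Random non-negative weights. Biases may take any sign: monotonicity in the
# inputs only requires the weights to be non-negative.
dims = [2, 16, 16, 1]
weights = [rng.uniform(0, 1, (dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.normal(size=dims[i + 1]) for i in range(len(dims) - 1)]

# Check coordinate-wise monotonicity numerically: increasing any single input
# coordinate never decreases the output.
x = rng.normal(size=2)
for i in range(2):
    bumped = x.copy()
    bumped[i] += 1.0
    assert nonneg_mlp(bumped, weights, biases) >= nonneg_mlp(x, weights, biases)

# Consequence: XOR is non-monotone (f(0,0)=0 < f(1,0)=1 > f(1,1)=0), so no
# choice of non-negative weights with a non-decreasing activation can fit it.
print("monotonicity check passed")
```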