Why do brains have inhibitory connections? Why do deep networks have negative weights? There are many function-specific explanations for the necessity of inhibitory connections, including balancing excitatory connections, supporting memory and decision-making, and preventing seizures. We propose an answer from the perspective of representation capacity. We believe representing functions is the primary role of both (i) the brain in natural intelligence and (ii) deep networks in artificial intelligence. Our answer to why there are inhibitory/negative weights is: to learn more functions. We prove that, in the absence of negative weights, neural networks are not universal approximators. While this may be an intuitive result, to the best of our knowledge there is no formal theory, in either machine learning or neuroscience, that demonstrates why negative weights are crucial for representation capacity. Further, we provide insights on the geometric properties of the representation space that non-negative deep networks cannot represent. We expect these insights will yield a deeper understanding of more sophisticated inductive priors imposed on the distribution of weights that lead to more efficient biological and machine learning.
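The loss of universal approximation can be illustrated with a minimal sketch (an illustration under stated assumptions, not the paper's formal proof): if all weights are non-negative and the activation is monotone, such as ReLU, then every layer preserves the coordinate-wise ordering of its inputs, so the whole network computes a function that is non-decreasing in each input and therefore cannot approximate any decreasing target, e.g. f(x) = -x. The architecture below (a fully connected ReLU network with random non-negative weights) is assumed for this sketch only.

```python
# Sketch: a random fully connected network whose weights are constrained to
# be non-negative is monotone non-decreasing in each input, regardless of
# the random draw, because matrix products with non-negative entries and
# monotone activations both preserve coordinate-wise ordering.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Layer sizes and parameters; np.abs enforces the non-negativity constraint.
# Biases may be negative: they are constants, so monotonicity is unaffected.
sizes = [1, 16, 16, 1]
weights = [np.abs(rng.normal(size=(m, n))) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=n) for n in sizes[1:]]

# Evaluate on an increasing grid: the outputs are always non-decreasing,
# so no decreasing function (e.g. f(x) = -x) can be approximated.
xs = np.linspace(-3.0, 3.0, 101).reshape(-1, 1)
ys = forward(xs, weights, biases).ravel()
assert np.all(np.diff(ys) >= -1e-12), "non-negative ReLU nets are monotone"
print("output is non-decreasing over the grid:", bool(np.all(np.diff(ys) >= -1e-12)))
```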