Why do brains have inhibitory connections? Neuroscientists may answer: to balance excitatory connections, to memorize, to decide, to avoid seizures, and so on. There seem to be many function-specific explanations for the necessity of inhibitory connections, but no theoretical result addresses the question in its most general form. Leveraging deep neural networks (DNNs), a well-established model of the brain, we ask: why do networks have negative weights? Our answer: to learn more functions. We prove that, in the absence of negative weights, neural networks are not universal approximators. Further, we provide insight into the geometric properties of the representation space that non-negative DNNs cannot capture. While this result may be intuitive, to the best of our knowledge there is no formal theory, in either the machine learning or the neuroscience literature, that demonstrates why negative weights are crucial for representation capacity. Our result provides the first theoretical justification for why inhibitory connections in brains and negative weights in DNNs are essential for networks to represent all functions.
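To make the non-universality claim concrete, the following is a minimal numerical sketch (Python/NumPy), not the paper's proof or construction: with entrywise non-negative weight matrices and a monotone activation such as ReLU, every layer preserves the ordering of its inputs, so the network output is non-decreasing in each input coordinate and therefore cannot approximate any strictly decreasing target. The two-layer architecture and the `nonneg_mlp` helper below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def nonneg_mlp(x, W1, b1, W2, b2):
    """Two-layer ReLU network whose weight matrices are entrywise >= 0."""
    return relu(x @ W1 + b1) @ W2 + b2

# Random non-negative weights (biases may be any sign); a toy instance.
W1 = rng.uniform(0.0, 1.0, size=(1, 16))
b1 = rng.normal(size=16)
W2 = rng.uniform(0.0, 1.0, size=(16, 1))
b2 = rng.normal(size=1)

# Evaluate on an increasing grid of scalar inputs.
xs = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
ys = nonneg_mlp(xs, W1, b1, W2, b2).ravel()

# Non-negative weights plus a monotone activation mean each layer preserves
# the coordinatewise ordering of its inputs, so the output is non-decreasing
# in x. Hence no such network can approximate a strictly decreasing target
# like f(x) = 1 - x to arbitrary accuracy.
assert np.all(np.diff(ys) >= -1e-12), "output should be non-decreasing"
print("min consecutive difference:", np.diff(ys).min())  # >= 0 up to rounding
```

This monotonicity obstruction is only one simple instance of the phenomenon; the paper's geometric characterization of what non-negative DNNs cannot represent is more general.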