There is ample neurobiological evidence that context-sensitive neocortical neurons use their apical inputs as context to amplify the transmission of coherent feedforward (FF) inputs. However, it has not previously been demonstrated how this known mechanism can provide useful neural computation. Here we show for the first time that the processing and learning capabilities of this form of neural information processing are well matched to the abilities of the mammalian neocortex. Specifically, we show that a network composed of such local processors restricts the transmission of conflicting information to higher levels and greatly reduces the amount of activity required to process large amounts of heterogeneous real-world data. For example, when processing audiovisual speech, these local processors use seen lip movements to selectively amplify the FF transmission of the auditory information that those movements generate, and vice versa. As this mechanism is shown to be far more effective and efficient than the best available forms of deep neural networks, it offers a step change in understanding the brain's mysterious energy-saving mechanism and inspires advances in the design of enhanced forms of biologically plausible machine learning algorithms.
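To make the gating idea concrete, the following is a minimal numerical sketch of one plausible multiplicative modulatory transfer function; the specific functional form, the `gain` parameter, and the name `context_modulated_output` are illustrative assumptions, not the equations used in the study. The intended behaviour matches the mechanism described above: the feedforward drive carries the content, while the apical context only rescales its transmission, amplifying coherent inputs and attenuating conflicting ones.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_modulated_output(ff, context, gain=2.0):
    """
    Illustrative context-sensitive local processor (hypothetical form).

    ff      : net feedforward (basal/somatic) drive
    context : net contextual (apical) drive
    gain    : hypothetical scaling of the coherence term

    The feedforward drive alone determines what is transmitted; the
    context only rescales it, so transmission is amplified when ff and
    context agree (ff * context > 0) and attenuated when they conflict.
    """
    # Coherence term in (0, 1): near 1 for coherent inputs, near 0 for conflicting ones.
    coherence = sigmoid(gain * ff * context)
    # Resulting scale factor lies between 0.5x (conflict) and 1.5x (coherence).
    return ff * (0.5 + coherence)

# Coherent context amplifies, conflicting context suppresses, absent context is neutral:
ff = 1.0
for c in (+1.0, 0.0, -1.0):
    print(f"context={c:+.1f} -> output={context_modulated_output(ff, c):.3f}")
```

Under these assumptions, a unit driven by coherent audio and lip-movement signals would transmit more strongly than one receiving either signal alone, while a unit whose feedforward input conflicts with its context would be damped, which is the qualitative behaviour the abstract attributes to the local processors.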