Deep learning (DL) can arguably achieve superhuman performance in many real-world domains, but at unsustainably high energy costs. We hypothesise that the fundamental problem lies in its intrinsic dependence on simplified 'point' neurons that inherently maximise the transmission of information, irrespective of whether that information is relevant to other neurons or to the long-term benefit of the whole network. This leads to unnecessary neural firing and conflicting messages to higher perceptual layers, which makes DL energy inefficient and hard to train. We can circumvent this limitation of DL by mimicking a context-sensitive two-point neocortical neuron that, at one point, receives input from diverse neurons as context, which amplifies the transmission of coherent feedforward (FF) information received at the other point and suppresses the transmission of incoherent FF information. We show that a deep network composed of such local processors seeks to maximise agreement between the active neurons, thus restricting the transmission of conflicting information to higher levels and reducing the amount of neural activity required to process large amounts of heterogeneous real-world data. Shown here to be far more effective and efficient than current forms of DL, this two-point neuron approach offers a step change towards transforming the cellular foundations of deep network architectures.
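To make the amplify/suppress idea concrete, the following is a minimal NumPy sketch of a context-modulated activation in the spirit of the two-point unit described above. The specific functional form (a Kay–Phillips-style modulatory transfer function) and its scaling constants are illustrative assumptions, not the exact function used in this work.

```python
import numpy as np

def two_point_activation(r, c):
    """Illustrative context-modulated activation for a 'two-point' unit.

    r : feedforward (FF) drive received at one point (e.g. basal input)
    c : contextual drive received at the other point (e.g. apical input)

    The context rescales, but does not by itself drive, the FF signal:
    when r and c are coherent (same sign) the output is amplified,
    and when they conflict the output is suppressed, so little
    conflicting information is passed to higher layers.
    """
    # Assumed modulatory form for illustration only; the paper's actual
    # transfer function may differ in shape and constants.
    return 0.5 * r * (1.0 + np.exp(2.0 * r * c))

# Coherent FF and context -> amplified; conflicting -> suppressed.
print(two_point_activation(1.0, 1.0))   # ~4.19 (amplified)
print(two_point_activation(1.0, -1.0))  # ~0.57 (suppressed)
print(two_point_activation(1.0, 0.0))   # 1.0  (no context, signal passes unchanged)
```

Note the asymmetry this sketch is meant to convey: the FF input r determines whether the unit has anything to transmit, while the context c only gates how strongly it is transmitted.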