Deep learning (DL) has big-data processing capabilities that match, and in many real-world domains exceed, those of humans, but at the cost of energy requirements that may be unsustainable in some applications and of errors that, though infrequent, can be large. We hypothesise that a fundamental weakness of DL lies in its intrinsic dependence on integrate-and-fire point neurons, which maximise information transmission irrespective of whether that information is relevant in the current context. This leads to unnecessary neural firing and to the feedforward transmission of conflicting messages, making learning difficult and processing energy inefficient. Here we show how to circumvent these limitations by mimicking the capabilities of context-sensitive neocortical neurons, which use input from diverse sources as a context to amplify the transmission of relevant information and attenuate the transmission of irrelevant information. We demonstrate that a deep network composed of such local processors seeks to maximise agreement between the active neurons, thereby restricting the transmission of conflicting information to higher levels and reducing the neural activity required to process large amounts of heterogeneous real-world data. Shown here to be far more effective and efficient than current forms of DL, this two-point neuron study offers a possible step change in the cellular foundations of deep network architectures.
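To make the contrast with point neurons concrete, the following is a minimal sketch of the kind of context-sensitive unit the abstract describes: a two-point neuron whose feedforward (basal/somatic) drive is gated by a separate contextual (apical) drive. The function name `two_point_activation` and the specific tanh-based modulatory transfer function are illustrative assumptions, not the paper's exact formulation; several related forms exist in the coherent-infomax literature.

```python
import numpy as np

def two_point_activation(r, c):
    """Illustrative context-modulated transfer function (an assumption,
    not the paper's exact form). The feedforward drive r is amplified
    when the contextual drive c agrees with it in sign (r*c > 0) and
    attenuated toward silence when they conflict (r*c < 0); with no
    context (c = 0), half of the drive is transmitted.
    """
    return r * 0.5 * (1.0 + np.tanh(r * c))

# Feedforward and contextual drives for four example neurons.
r = np.array([1.0,  1.0, -1.0, 1.0])   # receptive-field (somatic) input
c = np.array([2.0, -2.0,  2.0, 0.0])   # contextual (apical) input
print(two_point_activation(r, c))
# ~[0.98, 0.02, -0.02, 0.5]: agreement passes nearly the full drive,
# conflict is nearly silenced, absent context transmits at half strength.
```

Under this kind of modulation, a unit fires strongly only when its feedforward evidence is supported by its context, which is one way to realise the abstract's claim that conflicting messages are attenuated before reaching higher levels and that overall activity is reduced.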