The brain is constrained not only by the energy needed to fuel computation, but also by the energy needed to form memories. Experiments have shown that learning even simple conditioning tasks carries a significant metabolic cost, yet learning a task like MNIST to 95% accuracy appears to require at least 10^{8} synaptic updates. The brain has therefore likely evolved to learn using as little energy as possible. We explored the energy required for learning in feedforward neural networks. Based on a parsimonious energy model, we propose two plasticity-restricting algorithms that save energy: 1) modify only synapses with large updates, and 2) restrict plasticity to subsets of synapses that form a path through the network. Combining these two methods leads to substantial energy savings while incurring only a small increase in learning time. In biology, networks are often much larger than the task requires; particularly in that case, large savings can be achieved. Thus, competitively restricting plasticity helps to save the metabolic energy associated with synaptic plasticity. The results might lead to a better understanding of biological plasticity and to a better match between artificial and biological learning. Moreover, the algorithms might also benefit hardware, because in electronics memory storage is energetically costly as well.
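The first idea, restricting updates to synapses with large proposed changes, can be sketched as a sparse SGD step. This is a minimal illustration, not the paper's implementation: the `fraction` parameter, the layer shapes, and the plain SGD rule are all illustrative assumptions.

```python
import numpy as np

def sparse_update(weights, grads, lr=0.1, fraction=0.1):
    """Apply SGD only to the fraction of synapses with the largest
    proposed updates; all other synapses stay frozen, avoiding the
    metabolic (or memory-write) cost of small changes."""
    updates = lr * grads
    # Threshold = magnitude of the k-th largest proposed update.
    k = max(1, int(fraction * updates.size))
    thresh = np.partition(np.abs(updates).ravel(), -k)[-k]
    mask = np.abs(updates) >= thresh
    return weights - updates * mask, int(mask.sum())

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))  # one illustrative weight matrix
g = rng.normal(size=(100, 100))  # its gradient
w_new, n_changed = sparse_update(w, g, fraction=0.1)
```

Here only about 10% of the synapses are modified per step; in a network much larger than the task requires, most weights would rarely be touched at all.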