Equilibrium Propagation (EP) is a biologically inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the spatial locality of its learning rule, fosters the design of energy-efficient hardware dedicated to learning. In practice, however, EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon, and that cancelling it allows training deep ConvNets by EP, including architectures with distinct forward and backward connections. These results highlight EP as a scalable approach to computing error gradients in deep neural networks, thereby motivating its hardware implementation.
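To make the finite-nudging bias concrete, here is a minimal sketch in standard EP notation; the primitive function $\Phi$, parameters $\theta$, nudging strength $\beta$, and fixed points $s_*^\beta$ are assumptions of this illustration and are not spelled out in the abstract. Writing $f(\beta) = \frac{\partial \Phi}{\partial \theta}(s_*^\beta)$ for the weight-gradient readout at the fixed point reached under nudging strength $\beta$, the EP theorem identifies the loss gradient with $f'(0)$ (up to sign convention). A Taylor expansion shows why a finite $\beta$ biases the one-sided estimate and why a symmetric estimate cancels the leading term:

\[
\frac{f(\beta) - f(0)}{\beta} \;=\; f'(0) + \frac{\beta}{2} f''(0) + O(\beta^2),
\qquad
\frac{f(\beta) - f(-\beta)}{2\beta} \;=\; f'(0) + O(\beta^2).
\]

The one-sided estimate, a forward finite difference in $\beta$, carries an $O(\beta)$ bias; the two-sided estimate, a central difference obtained by running both a $+\beta$ and a $-\beta$ nudged phase, removes the first-order term, which is one natural reading of the bias cancellation claimed above.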