Activity-difference-based learning algorithms, such as contrastive Hebbian learning and equilibrium propagation, have been proposed as biologically plausible alternatives to error back-propagation. However, on traditional digital chips these algorithms suffer from having to solve a costly inference problem twice, making these approaches more than two orders of magnitude slower than back-propagation. In the analog realm, equilibrium propagation may be promising for fast and energy-efficient learning, but states still need to be inferred and stored twice. Inspired by lifted neural networks and compartmental neuron models, we propose a simple energy-based compartmental neuron model, termed dual propagation, in which each neuron is a dyad with two intrinsic states. At inference time, these intrinsic states encode the error/activity duality through their difference and their mean, respectively. The advantage of this method is that only a single inference phase is needed and that this inference can be solved in a layer-wise closed form. Experimentally, we show on common computer vision datasets, including Imagenet32x32, that dual propagation performs equivalently to back-propagation in terms of both accuracy and runtime.
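To make the dyad idea concrete, the following is a minimal NumPy sketch of how two intrinsic states per neuron could encode activity through their mean and error through their difference, with a single forward sweep followed by a layer-wise closed-form assignment of the differences. The layer sizes, tanh activation, squared-error output nudge, and learning rate are illustrative assumptions and not the paper's exact update equations.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 3]                                  # input, hidden, output widths (assumed)
Ws = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def f(z):            # activation (assumed tanh)
    return np.tanh(z)

def df(z):           # its derivative, used to propagate the state difference
    return 1.0 - np.tanh(z) ** 2

def dual_propagation_step(x, target, lr=0.1):
    """One inference + learning step on a single example (minimal sketch)."""
    # Forward sweep: initialise both states of every dyad to the feedforward
    # activity, so the mean equals the activity and the difference is zero
    # before any error is injected.
    pre, mean = [], [x]
    for W in Ws:
        pre.append(W @ mean[-1])
        mean.append(f(pre[-1]))
    x_plus = [m.copy() for m in mean]
    x_minus = [m.copy() for m in mean]

    # Nudge the two output states apart with a squared-error target signal
    # (an assumed choice of loss for this illustration).
    out_err = target - mean[-1]
    x_plus[-1] = mean[-1] + 0.5 * out_err
    x_minus[-1] = mean[-1] - 0.5 * out_err

    # Layer-wise closed-form backward sweep: the state difference at layer k+1
    # determines the state difference at layer k, while the mean remains the
    # feedforward activity.
    for k in range(len(Ws) - 1, 0, -1):
        delta = df(pre[k - 1]) * (Ws[k].T @ (x_plus[k + 1] - x_minus[k + 1]))
        x_plus[k] = mean[k] + 0.5 * delta
        x_minus[k] = mean[k] - 0.5 * delta

    # Local, gradient-like weight update: outer product of a dyad's state
    # difference with the presynaptic mean activity.
    for k, W in enumerate(Ws):
        W += lr * np.outer(x_plus[k + 1] - x_minus[k + 1],
                           (x_plus[k] + x_minus[k]) / 2)

dual_propagation_step(rng.normal(size=4), target=np.array([1.0, 0.0, 0.0]))
```

In this sketch only one inference pass is performed: the differences are assigned layer by layer in closed form rather than by iterating a second inference phase, which is the property the abstract contrasts with contrastive Hebbian learning and equilibrium propagation.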