Backpropagation is widely used to train artificial neural networks, but its relationship to synaptic plasticity in the brain is unknown. Some biological models of backpropagation rely on feedback projections that are symmetric with feedforward connections, but experiments do not corroborate the existence of such symmetric backward connectivity. Random feedback alignment offers an alternative model in which errors are propagated backward through fixed, random backward connections. This approach successfully trains shallow models, but it learns slowly and does not perform well on deeper models or in online learning. In this study, we develop a novel meta-plasticity approach to discover interpretable, biologically plausible plasticity rules that improve online learning performance with fixed random feedback connections. The resulting plasticity rules show improved online training of deep models in the low-data regime. Our results highlight the potential of meta-plasticity to discover effective, interpretable learning rules that satisfy biological constraints.
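To make the feedback-alignment mechanism concrete, the following is a minimal sketch of an online update for a two-layer ReLU network with a squared-error loss. All names, sizes, and hyperparameters (n_in, n_hid, n_out, lr) are illustrative assumptions, not the paper's implementation; the key point is that the hidden-layer error signal is computed through a fixed random matrix B rather than the transpose of the forward weights W2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (assumptions, not from the paper).
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, 0.01, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.01, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0.0, 0.01, (n_hid, n_out))   # fixed random feedback weights

def relu(x):
    return np.maximum(x, 0.0)

def fa_step(x, y_target, lr=0.01):
    """One online feedback-alignment update on a single example."""
    global W1, W2
    # Forward pass.
    h_pre = W1 @ x
    h = relu(h_pre)
    y = W2 @ h
    # Output error (gradient of squared-error loss w.r.t. y).
    e = y - y_target
    # Backward pass: project the error through the fixed random B
    # instead of W2.T, then gate by the ReLU derivative.
    delta_h = (B @ e) * (h_pre > 0)
    # Weight updates; B itself is never learned.
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

# Usage: one update on a random input with a one-hot target.
fa_step(rng.normal(size=n_in), np.eye(n_out)[3])
```

Because B is fixed and random, learning relies on the forward weights gradually aligning with the feedback pathway, which is why this scheme trains shallow models but degrades in deeper or online settings, motivating the meta-learned plasticity rules described above.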