In this work, we propose a generalization of the forward-forward (FF) algorithm that we call the predictive forward-forward (PFF) algorithm. Specifically, we design a dynamic, recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit, combining elements of predictive coding, an emerging and viable neurobiological process theory of cortical function, with the forward-forward adaptation scheme. Furthermore, PFF efficiently learns to propagate learning signals and update synapses using forward passes only, eliminating some of the key structural and computational constraints imposed by backprop-based schemes. Beyond these computational advantages, the PFF process could prove useful for understanding how biological neurons learn from local (and global) signals in the absence of feedback connections. We run several experiments on image data and demonstrate that the PFF procedure works as well as backprop, offering a promising brain-inspired algorithm for classifying, reconstructing, and synthesizing data patterns. Our approach thus presents further evidence of the promise afforded by backprop-alternative credit assignment algorithms within the context of brain-inspired computing.
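To make the forward-pass-only learning idea concrete, the following is a minimal sketch of an FF-style, layer-local goodness update, the kind of rule that PFF generalizes. It is illustrative only and not the paper's implementation: the class name FFLayer, the goodness threshold, the layer sizes, and the optimizer settings are assumptions introduced here for the example.

```python
# Minimal sketch (assumed, not the paper's code) of a forward-forward style
# layer-local update: each layer adjusts its own synapses from its own forward
# activities, with no error signal propagated backwards across layers.
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)
        self.threshold = threshold  # goodness threshold separating positive from negative data
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only its direction (not magnitude) carries information forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def local_update(self, x_pos, x_neg):
        # Goodness = sum of squared activities; push it above the threshold for
        # positive samples and below it for negative samples.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradient stays inside this layer: no cross-layer backprop
        self.opt.step()
        # Detach so the next layer receives activities with no gradient path backwards.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Usage: stack layers and train them with local updates only, passing activities forward.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos, x_neg = torch.rand(32, 784), torch.rand(32, 784)  # placeholder positive/negative batches
for layer in layers:
    x_pos, x_neg = layer.local_update(x_pos, x_neg)
```

The property the sketch is meant to highlight is that each layer's update depends only on its own forward activities, which is what removes the backward pass that backprop requires; the full PFF procedure additionally couples such a representation circuit with a directed generative circuit, which is not shown here.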