Neuroscience-inspired models, such as predictive coding, have the potential to play an important role in the future of machine intelligence. However, they are not yet used in industrial applications due to limitations such as their lack of efficiency. In this work, we address this by proposing incremental predictive coding (iPC), a variation of the original framework derived from the incremental expectation-maximization algorithm, in which every operation can be performed in parallel without external control. We show, both theoretically and empirically, that iPC is much faster than the original algorithm developed by Rao and Ballard, while maintaining performance comparable to backpropagation on image classification tasks. This work has general applications in computational neuroscience and machine learning, as well as specific applications in scenarios where automation and parallelization are important, such as distributed computing and implementations of deep learning models on analog and neuromorphic chips.
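To make the parallel, fully automatic nature of the update concrete, below is a minimal sketch of an iPC-style training step on a toy linear network. The names (`ipc_step`, `GAMMA`, `ALPHA`) and layer sizes are illustrative, not from the paper's code, and nonlinearities are omitted. Whereas the scheme of Rao and Ballard runs the inference (activity) updates to convergence before each weight update, the incremental-EM derivation lets activities and weights move together at every step.

```python
import numpy as np

# Minimal iPC sketch on a toy linear 3-layer network (784 -> 256 -> 10).
# Hypothetical names and sizes; nonlinearities omitted for brevity.
rng = np.random.default_rng(0)
sizes = [784, 256, 10]
W = [rng.normal(0.0, 0.05, (sizes[l + 1], sizes[l])) for l in range(2)]

GAMMA, ALPHA = 0.1, 0.001  # step sizes for activities and weights

def ipc_step(x, W):
    """One iPC step: activities and weights are updated at the same time."""
    preds = [W[l] @ x[l] for l in range(2)]        # layer l predicts layer l+1
    eps = [x[l + 1] - preds[l] for l in range(2)]  # local prediction errors

    # Activity update for the hidden layer: gradient descent on the energy
    # E = sum_l ||x[l+1] - W[l] x[l]||^2 / 2, with x[0] clamped to the
    # input and x[2] clamped to the label during supervised training.
    x[1] = x[1] - GAMMA * (eps[0] - W[1].T @ eps[1])

    # Weight update at the *same* step -- in the original predictive coding
    # scheme this would wait until the activity updates had converged.
    for l in range(2):
        W[l] = W[l] + ALPHA * np.outer(eps[l], x[l])
    return x, W

# Usage: clamp input and one-hot target, then run a few joint steps.
x = [rng.normal(size=784), np.zeros(256), np.zeros(10)]
x[2][3] = 1.0
for _ in range(8):
    x, W = ipc_step(x, W)
```

Because each update depends only on locally available prediction errors and activities, all layers can in principle be updated simultaneously, with no external controller scheduling separate inference and learning phases.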