Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. The majority of results in AI thus far have been achieved using deep neural networks trained with a learning algorithm called error backpropagation, which has long been considered biologically implausible. Motivated by this, recent works have studied learning algorithms for deep neural networks inspired by the neurosciences. One such theory, called predictive coding (PC), has shown promising properties that make it potentially valuable for the machine learning community: it can model information processing in different areas of the brain, can be used in control and robotics, has a solid mathematical foundation in variational inference, and performs its computations asynchronously. Inspired by these properties, works proposing novel PC-like algorithms are starting to appear across multiple sub-fields of machine learning and AI at large. Here, we survey such efforts by first giving a broad overview of the history of PC to establish common ground for understanding the recent developments, then describing current efforts and results, and concluding with an extensive discussion of possible implications and ways forward.