The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.