This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020. We brought together experts from the field of theoretical neuroscience to debate emerging issues in our understanding of how learning is implemented in biological recurrent neural networks. Here, we briefly review the common assumptions about biological learning and the corresponding findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in the recurrent neural networks commonly used in artificial intelligence. We then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, the theory-experiment divide, and objective functions. Finally, we conclude with recommendations for both theoretical and experimental neuroscientists on designing new studies that could help bring clarity to these issues.