Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the context of neural network (NN) models for neuroscience, concerns have been raised about the intelligibility of these models and how (if at all) they relate to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are causally responsible for that behavior. In biological systems, many of these dependencies are naturally "top-down": ethological imperatives interact with evolutionary and developmental constraints under natural selection. We describe how the optimization techniques used to construct NN models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are: when a challenging, ecologically relevant goal is shared by an NN and the brain, it places tight constraints on the possible mechanisms exhibited in both kinds of systems. By combining two familiar modes of explanation, one based on bottom-up mechanism (whose relation to neural network models we address in a companion paper) and the other on top-down constraints, these models illuminate brain function.