A central question in computational neuroscience is how structure determines function in neural networks. Emerging high-quality, large-scale connectomic datasets raise the question of what general functional principles can be gleaned from structural information such as the distribution of excitatory/inhibitory synapse types and the distribution of synaptic weights. Motivated by this question, we develop a statistical mechanical theory of learning in neural networks that incorporates structural information as constraints. We derive an analytical solution for the memory capacity of the perceptron, a basic feedforward model of supervised learning, with a constraint on the distribution of its weights. Our theory predicts that the reduction in capacity due to the constrained weight distribution is related to the Wasserstein distance between the imposed distribution and the standard normal distribution. To test the theoretical predictions, we use optimal transport theory and information geometry to develop an SGD-based algorithm that finds weights which simultaneously learn the input-output task and satisfy the distribution constraint. We show that training with our algorithm can be interpreted as a geodesic flow in the Wasserstein space of probability distributions. We further develop a statistical mechanical theory of teacher-student perceptron rule learning and ask how the student can best incorporate prior knowledge of the rule. Our theory shows that it is beneficial for the learner to adopt different prior weight distributions during learning, and that distribution-constrained learning outperforms both unconstrained and sign-constrained learning. Our theory and algorithm provide novel strategies for incorporating prior knowledge about weights into learning and reveal a powerful connection between structure and function in neural networks.
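For one-dimensional distributions, the Wasserstein-2 distance invoked above has the closed form $W_2^2(p,q) = \int_0^1 \big(F_p^{-1}(u) - F_q^{-1}(u)\big)^2\,du$, where $F^{-1}$ denotes the quantile function. As a concrete illustration of distribution-constrained learning, the sketch below alternates perceptron-style SGD updates with a rank-matching projection of the weights onto the target distribution; in one dimension this projection is the optimal-transport map, i.e. the $W_2$-closest weight vector with the prescribed empirical distribution. The projection scheme, the margin criterion, and the lognormal target are illustrative assumptions, not necessarily the algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 100                      # input dimension, number of patterns
X = rng.standard_normal((P, N))      # random input patterns
y = rng.choice([-1.0, 1.0], size=P)  # random binary labels

# Illustrative target weight distribution (assumed here): a lognormal,
# represented by its N sorted quantiles.
target_quantiles = np.sort(rng.lognormal(mean=0.0, sigma=0.5, size=N))

def ot_project(w, q):
    """Rank-matching projection: the k-th smallest weight is replaced by
    the k-th target quantile. For scalar weights this is the optimal
    transport map onto the constraint set, i.e. the W2-closest vector
    whose empirical distribution matches the target."""
    out = np.empty_like(w)
    out[np.argsort(w)] = q
    return out

w = ot_project(rng.standard_normal(N), target_quantiles)
lr, margin = 0.05, 0.1

for step in range(5000):
    # Perceptron-style SGD: update on the worst-classified pattern.
    scores = y * (X @ w) / np.sqrt(N)
    i = np.argmin(scores)
    if scores[i] > margin:
        break                        # all patterns classified with margin
    w += lr * y[i] * X[i] / np.sqrt(N)
    w = ot_project(w, target_quantiles)  # re-impose the weight distribution

print(f"fraction correct: {(y * (X @ w) > 0).mean():.3f}")
```

Because each projection returns the $W_2$-nearest constrained point, the trajectory of intermediate weight distributions can be read as a discrete path in Wasserstein space, consistent with the geodesic-flow interpretation stated in the abstract.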