Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved towards studying multiple interacting areas, and RNN theory needs to be extended likewise. We take a constructive approach to this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions that allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained `networks of networks' can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
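To make the stability-constraint idea concrete, below is a minimal sketch (in PyTorch) of one standard way to keep a single leaky RNN contracting throughout gradient-based training: rescaling the recurrent weights so their spectral norm stays below one. This is a textbook sufficient condition, not the specific conditions derived in this work; the class `StableRNNCell`, its hyperparameters, and the two-area coupling at the end are purely illustrative assumptions.

```python
import torch
import torch.nn as nn


class StableRNNCell(nn.Module):
    """Leaky RNN cell whose recurrent weights are rescaled so their largest
    singular value stays below one. With a slope-bounded nonlinearity such
    as tanh, this is a classical sufficient condition for contraction
    (global exponential stability of trajectories)."""

    def __init__(self, hidden_size: int, input_size: int, dt: float = 0.1):
        super().__init__()
        self.W_raw = nn.Parameter(torch.randn(hidden_size, hidden_size) / hidden_size ** 0.5)
        self.B = nn.Parameter(torch.randn(hidden_size, input_size) / input_size ** 0.5)
        self.dt = dt

    def recurrent_weight(self) -> torch.Tensor:
        # Spectral-norm rescaling: sigma_max(W) < 1 holds after every
        # gradient step, so stability never has to be re-imposed post hoc.
        sigma = torch.linalg.matrix_norm(self.W_raw, ord=2)
        return 0.99 * self.W_raw / torch.clamp(sigma, min=1.0)

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # One Euler step of the leaky dynamics  dx/dt = -x + W*tanh(x) + B*u.
        W = self.recurrent_weight()
        return x + self.dt * (-x + torch.tanh(x) @ W.T + u @ self.B.T)


# Hierarchically coupling two constrained cells (area A feeding area B)
# preserves contraction; feedback coupling between the areas would require
# the additional interconnection conditions referred to in the text.
cell_a, cell_b = StableRNNCell(64, 32), StableRNNCell(64, 64)
x_a, x_b = torch.zeros(1, 64), torch.zeros(1, 64)
u = torch.randn(1, 32)
x_a = cell_a(x_a, u)    # area A driven by an external input
x_b = cell_b(x_b, x_a)  # area B driven by area A's activity
```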