Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
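A minimal, hypothetical sketch of the idea (not the paper's exact construction): each module below is an RNN whose recurrent weight is rescaled to have spectral norm below 1, a simple sufficient condition for the continuous-time dynamics dx/dt = -x + W tanh(x) + u to be contracting, and two such modules are joined by an antisymmetric linear feedback pair (B, -B^T). The symmetric part of the coupled system's Jacobian is then block-diagonal and negative definite, so stability holds no matter how large the learned feedback B becomes. All class and variable names are illustrative.

```python
import torch
import torch.nn as nn


class ContractingRNN(nn.Module):
    """One stable module: dx/dt = -x + W tanh(x) + u, with ||W||_2 < 1."""

    def __init__(self, n: int):
        super().__init__()
        self.V = nn.Parameter(0.1 * torch.randn(n, n))  # unconstrained weights

    def weight(self) -> torch.Tensor:
        # Rescale so the spectral norm stays below ~0.99; gradients flow
        # through the rescaling, so the constraint is easy to optimize.
        norm = torch.linalg.matrix_norm(self.V, ord=2)
        return 0.99 * self.V / torch.clamp(norm, min=1.0)

    def dxdt(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        return -x + torch.tanh(x) @ self.weight().T + u


class NetworkOfNetworks(nn.Module):
    """Two contracting modules coupled by antisymmetric feedback (B, -B^T)."""

    def __init__(self, n1: int, n2: int):
        super().__init__()
        self.rnn1, self.rnn2 = ContractingRNN(n1), ContractingRNN(n2)
        self.B = nn.Parameter(torch.randn(n1, n2))  # feedback may be large

    def step(self, x1, x2, u1, u2, dt: float = 0.05):
        # Forward-Euler simulation of the coupled continuous-time dynamics.
        dx1 = self.rnn1.dxdt(x1, u1 + x2 @ self.B.T)
        dx2 = self.rnn2.dxdt(x2, u2 - x1 @ self.B)
        return x1 + dt * dx1, x2 + dt * dx2
```

Because the stability guarantee is independent of the magnitude of B, gradient-based training can freely strengthen the inter-module feedback while each module, and the assembly as a whole, remains stable.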