Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience and machine intelligence. We train thousands of recurrent neural networks on a working memory task and then perform dynamical systems analysis on the resulting optimized networks, finding that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this apparent functional disadvantage, they leverage their attractor landscape more efficiently and, paradoxically, are considerably more robust to noise. Our results provide new dynamical hypotheses regarding how working memory function is encoded in both natural and artificial neural networks.
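To make the "dynamical systems analysis" step concrete, the sketch below illustrates one standard way such analyses are often carried out on trained recurrent networks: searching for fixed and slow points of the autonomous state update by minimizing a speed function q(x) = ½‖F(x) − x‖². This is a minimal illustration only, not the authors' code; the network size, random weights, and update rule F(x) = tanh(Wx + b) are assumptions standing in for an optimized network from the study.

```python
# Minimal sketch: locating fixed and slow points of a vanilla RNN by
# minimizing the speed q(x) = 0.5 * ||F(x) - x||^2, where F is the
# autonomous (zero-input) state update. Weights are random stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50                                               # recurrent units (assumed)
W = rng.normal(scale=1.2 / np.sqrt(n), size=(n, n))  # recurrent weights (assumed)
b = np.zeros(n)                                      # bias

def step(x):
    """One autonomous update of the RNN state (no external input)."""
    return np.tanh(W @ x + b)

def speed(x):
    """q(x): how far the state moves in one step; zero at fixed points."""
    dx = step(x) - x
    return 0.5 * dx @ dx

# Seed the optimizer from states visited along a short simulated trajectory,
# then descend q(x) to find candidate fixed/slow points.
x = rng.normal(scale=0.5, size=n)
trajectory = []
for _ in range(200):
    x = step(x)
    trajectory.append(x.copy())

candidates = []
for x0 in trajectory[::20]:
    res = minimize(speed, x0, method="L-BFGS-B")
    candidates.append((res.fun, res.x))

# Points with q ~ 0 are fixed points; small-but-nonzero q marks slow regions
# of state space (e.g. slow stable manifolds along which a stored stimulus
# could decay gradually rather than being pinned at a stable attractor).
for q, _ in sorted(candidates, key=lambda c: c[0])[:5]:
    print(f"candidate speed q = {q:.3e}")
```

In this picture, memories held at stable attractors correspond to minima with q ≈ 0, whereas the slow-manifold mechanism described above would show up as extended regions of small but nonzero q, consistent with gradual forgetting over the memory period.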