We study the learning dynamics and the representations emerging in Recurrent Neural Networks (RNNs) trained to integrate one or multiple temporal signals. Combining analytical and numerical investigations, we characterize the conditions under which an RNN with n neurons learns to integrate D (≪ n) scalar signals of arbitrary duration. We show, both for linear and ReLU neurons, that its internal state lives close to a D-dimensional manifold, whose shape is related to the activation function. Each neuron therefore carries, to various degrees, information about the value of all integrals. We discuss the deep analogy between our results and the concept of mixed selectivity forged by computational neuroscientists to interpret cortical recordings.
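For linear neurons, the existence of such low-dimensional integrating solutions can be made concrete by hand. The following minimal sketch (our own illustrative construction, not a network trained as described in the text; all variable names are hypothetical) builds a linear RNN of n neurons whose recurrent weights project onto a random D-dimensional subspace, so that the hidden state accumulates the D input signals and never leaves that subspace.

```python
# Minimal sketch (hand-built, not a trained network): a linear RNN with n
# neurons that exactly integrates D << n scalar signals. The hidden state
# stays on a D-dimensional linear subspace, illustrating the manifold picture.
import numpy as np

rng = np.random.default_rng(0)
n, D, T = 64, 3, 100                     # neurons, signals, time steps

# Orthonormal basis U of a random D-dimensional subspace of R^n (U.T @ U = I_D).
U, _ = np.linalg.qr(rng.standard_normal((n, D)))

W_rec = U @ U.T      # recurrent weights: orthogonal projector onto span(U)
W_in  = U            # input weights map each signal into the subspace
W_out = U.T          # readout weights recover the D running integrals

x = rng.standard_normal((T, D))          # the D scalar input signals
h = np.zeros(n)
for t in range(T):
    h = W_rec @ h + W_in @ x[t]          # h_t = U @ (cumulative sum of inputs)
y = W_out @ h                            # readout: integral of each signal

assert np.allclose(y, x.sum(axis=0))     # readout equals the exact integrals
assert np.allclose(h, U @ (U.T @ h))     # state has no component outside span(U)
```

This projector construction makes the D-dimensional manifold explicit; trained networks need not converge to exactly these weights, but the claim above is that their internal states remain close to such a manifold, with the manifold's shape (linear subspace vs. curved set) depending on the activation function.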