Reservoir computing is a popular approach for designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g. via gradient descent), which makes them amenable to analytical study and has attracted a broad community of researchers, spanning fields from dynamical systems to neuroscience. Yet even in the simple linear case, the working principle of these networks is not fully understood, and applied research is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which expresses the state evolution in terms of the controllability matrix. This matrix encodes salient characteristics of the network dynamics: in particular, its rank can serve as an input-independent measure of the network's memory. The proposed approach makes it possible to compare different architectures and to explain why a cyclic topology achieves favourable results.
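The rank-based memory measure described above can be sketched numerically. The following is a minimal illustration, assuming a linear reservoir of the form x[t+1] = W @ x[t] + w_in * u[t]; the names `W`, `w_in`, `controllability_matrix`, and the specific topologies compared below are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def controllability_matrix(W: np.ndarray, w_in: np.ndarray) -> np.ndarray:
    """Return R = [w_in, W w_in, W^2 w_in, ..., W^(N-1) w_in] column-wise."""
    cols = [w_in]
    for _ in range(W.shape[0] - 1):
        cols.append(W @ cols[-1])
    return np.column_stack(cols)

rng = np.random.default_rng(0)
N = 20
rho = 0.9                      # scaling of the recurrent weights
w_in = rng.standard_normal(N)  # input weight vector (untrained)

# Cyclic (ring) topology: a scaled circular shift matrix.
W_cyclic = rho * np.roll(np.eye(N), 1, axis=0)

# A degenerate reservoir: one repeated eigenvalue, no coupling structure.
W_diag = rho * np.eye(N)

rank_cyclic = np.linalg.matrix_rank(controllability_matrix(W_cyclic, w_in))
rank_diag = np.linalg.matrix_rank(controllability_matrix(W_diag, w_in))

print(rank_cyclic, rank_diag)
```

In this sketch the cyclic reservoir generically attains full rank N, while the repeated-eigenvalue reservoir collapses to rank 1 regardless of the input weights, illustrating how the rank of the controllability matrix distinguishes architectures independently of any particular input signal.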