Reservoir computing (RC) can efficiently process time-series data by transferring the input signal to a randomly connected recurrent neural network (RNN), referred to as a reservoir. The high-dimensional representation of the time-series data in the reservoir significantly simplifies subsequent learning tasks. Although this simple architecture allows fast learning and facile physical implementation, its learning performance is inferior to that of other state-of-the-art RNN models. In this paper, to improve the learning ability of RC, we propose self-modulated RC (SM-RC), which extends RC with a self-modulation mechanism. The self-modulation mechanism is realized through two gating variables: an input gate and a reservoir gate. The input gate modulates the input signal, and the reservoir gate modulates the dynamical properties of the reservoir. We demonstrated that SM-RC can perform attention tasks in which input information is retained or discarded depending on the input signal. We also found that a chaotic state emerged as a result of learning in SM-RC, indicating that the self-modulation mechanism provides RC with qualitatively different information-processing capabilities. Furthermore, SM-RC outperformed RC in the NARMA and Lorenz model tasks. In particular, in the Lorenz model tasks, SM-RC achieved higher prediction accuracy than RC with a reservoir 10 times larger. Because the SM-RC architecture requires only two additional gates, it is as amenable to physical implementation as RC, providing a new direction for realizing edge AI.
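To make the gating idea concrete, the following is a minimal sketch of an SM-RC state update in NumPy. The exact gate parameterization is an assumption for illustration: here both gates are sigmoid functions of the current reservoir state (`w_gin`, `w_gres` are hypothetical gate-readout weights, not taken from the paper), with the input gate scaling the input drive and the reservoir gate scaling the recurrent drive.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 1  # reservoir size, input dimension

# Standard RC ingredients: random recurrent and input weights.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W_in = rng.normal(0.0, 1.0, (N, D))

# Hypothetical gate-readout weights (an assumption; the paper's
# parameterization of the gates may differ).
w_gin = rng.normal(0.0, 0.1, N)
w_gres = rng.normal(0.0, 0.1, N)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def step(x, u):
    # Self-modulation: both gates are functions of the current state.
    g_in = sigmoid(w_gin @ x)    # input gate: modulates the input signal
    g_res = sigmoid(w_gres @ x)  # reservoir gate: modulates reservoir dynamics
    return np.tanh(g_res * (W @ x) + g_in * (W_in @ u))

# Drive the reservoir with a short random input sequence and collect
# the high-dimensional representation used by the linear readout.
x = np.zeros(N)
states = []
for t in range(50):
    u = rng.normal(0.0, 1.0, D)
    x = step(x, u)
    states.append(x.copy())
states = np.array(states)  # shape (T, N)
```

In plain RC both gates would be fixed at 1; making them state-dependent is what lets the network retain or discard input information depending on the signal, as in the attention tasks described above.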