In recent years, the machine learning community has seen a continuously growing interest in research investigating the dynamical aspects of both training procedures and machine learning models. Of particular interest among recurrent neural networks is the Reservoir Computing (RC) paradigm, characterized by conceptual simplicity and a fast training scheme. Yet the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by Generalized Synchronization (GS) when training an RC to solve a generic task. In particular, we show how GS allows the reservoir to correctly encode the system generating the input signal into its dynamics. We also discuss necessary and sufficient conditions for learning to be feasible in this approach. Moreover, we explore the role that ergodicity plays in this process, showing how its presence allows the learning outcome to apply to multiple input trajectories. Finally, we show that the satisfaction of GS can be measured by means of the Mutual False Nearest Neighbors index, which makes our theoretical derivations effective for practitioners.
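As a rough illustration of the kind of index mentioned above, the following is a minimal NumPy sketch of a Mutual False Nearest Neighbors-style ratio between a drive trajectory (the input-generating system) and a response trajectory (the reservoir states). The function name, array shapes, and the choice to exclude only self-matches are illustrative assumptions, not the paper's implementation; values of the ratio near 1 indicate that close neighbors in drive space remain close in response space, as expected under generalized synchronization.

```python
import numpy as np

def mfnn_index(x, y):
    """Mutual False Nearest Neighbors-style ratio (illustrative sketch).

    x: drive trajectory, shape (T, d_x)
    y: response trajectory, shape (T, d_y)
    Returns the time-averaged ratio; values near 1 suggest that a smooth
    functional relation y = phi(x) holds (generalized synchronization).
    """
    T = len(x)
    # Pairwise distance matrices in drive space and response space.
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    dy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Exclude trivial self-matches when searching for nearest neighbors.
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    nnd = dx.argmin(axis=1)  # nearest neighbor of each point in drive space
    nnr = dy.argmin(axis=1)  # nearest neighbor of each point in response space
    idx = np.arange(T)
    # How far apart drive-space neighbors land in response space, and
    # vice versa; both factors are ~1 when neighborhoods are preserved.
    ratio = (dy[idx, nnd] / dx[idx, nnd]) * (dx[idx, nnr] / dy[idx, nnr])
    return ratio.mean()
```

For the degenerate case `y = x` the ratio is exactly 1; for unrelated trajectories the nearest-neighbor structures disagree and the average ratio grows. A production implementation would use a k-d tree instead of the O(T^2) distance matrices and would also exclude temporally adjacent points from the neighbor search.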