Recurrent neural networks (RNNs) are brain-inspired models widely used in machine learning for analyzing sequential data. The present work contributes towards a deeper understanding of how RNNs process input signals, using response theory from nonequilibrium statistical mechanics. For a class of continuous-time stochastic RNNs (SRNNs) driven by an input signal, we derive a Volterra-type series representation for their output. This representation is interpretable and disentangles the input signal from the SRNN architecture. The kernels of the series are certain recursively defined correlation functions with respect to the unperturbed dynamics that completely determine the output. Exploiting connections between this representation and rough paths theory, we identify a universal feature, the response feature, which turns out to be the signature of the tensor product of the input signal and a natural support basis. In particular, we show that SRNNs can be viewed as kernel machines operating on a reproducing kernel Hilbert space associated with the response feature.
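As a rough sketch of the setup described above: the abstract does not state the model equations, so the form below, including the matrices $\Gamma$, $W$, $C$, the activation $a$, the readout $f$, and the kernels $K_n$, is an illustrative assumption consistent with continuous-time stochastic RNNs of this type, not the paper's exact formulation.

% Illustrative assumption: a common continuous-time SRNN form with
% hidden state h_t, input signal u_t, and driving noise B_t; the
% matrices Gamma, W, C, the activation a, the readout f, and the noise
% level sigma are placeholders, not taken from the paper.
\begin{equation}
  \mathrm{d}h_t = \big( -\Gamma h_t + W a(h_t) + C u_t \big)\,\mathrm{d}t
                  + \sigma\,\mathrm{d}B_t,
  \qquad y_t = \mathbb{E}\big[ f(h_t) \big].
\end{equation}
% For a scalar input (taken here for readability), a Volterra-type
% series expands the output in iterated integrals of u against kernels
% K_n, correlation functions of the unperturbed (u = 0) dynamics:
\begin{equation}
  y_t = K_0 + \sum_{n \ge 1} \int_{0 \le s_n \le \cdots \le s_1 \le t}
        K_n(t - s_1, \ldots, t - s_n)\, u_{s_1} \cdots u_{s_n}\,
        \mathrm{d}s_1 \cdots \mathrm{d}s_n .
\end{equation}

In such an expansion the kernels $K_n$ depend only on the unperturbed SRNN, while the iterated integrals carry the input signal; this separation is what is meant above by disentangling the input from the architecture, and the iterated-integral terms are the signature-type objects underlying the response feature.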