Recurrent neural networks are a powerful means of coping with time series. We show how linear, i.e., linearly activated recurrent neural networks (LRNNs) can approximate any time-dependent function f(t) given by a number of function values. The approximation can effectively be learned by simply solving a linear equation system; no backpropagation or similar methods are needed. Furthermore, the size of an LRNN can be reduced significantly in one step, after inspecting the eigenvalues of the network transition matrix, by keeping only the most relevant components. Therefore, in contrast to other approaches, we learn not only the network weights but also the network architecture. LRNNs have interesting properties: they settle into ellipse trajectories in the long run and allow both the prediction of further values and compact representations of functions. We demonstrate this in several experiments, among them multiple superimposed oscillators (MSO), robotic soccer, and stock price prediction. LRNNs outperform the previous state of the art for the MSO task with a minimal number of units.
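As a rough illustration of the two steps claimed above (learning by solving a single linear system, then shrinking the network by inspecting eigenvalues), the following NumPy sketch fits a fully linear RNN to a toy signal and keeps only the dominant eigencomponents. The reservoir-style random initialization, the chosen sizes, and the relevance criterion (eigenvalue magnitude) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_lrnn(f_values, n_hidden=40):
    """Fit the transition matrix W of a fully linear RNN x(t+1) = W x(t)
    whose first state component reproduces the given function values."""
    T = len(f_values)
    N = 1 + n_hidden
    X = np.zeros((N, T))
    X[0, :] = f_values                          # output row carries the target signal
    W_res = rng.standard_normal((n_hidden, N)) / (2 * np.sqrt(N))
    X[1:, 0] = rng.standard_normal(n_hidden)
    for t in range(T - 1):                      # hidden rows driven by a fixed random linear map
        X[1:, t + 1] = W_res @ X[:, t]
    # One linear least-squares problem, no backpropagation:
    # find W such that X[:, 1:] is approximately W @ X[:, :-1].
    W, *_ = np.linalg.lstsq(X[:, :-1].T, X[:, 1:].T, rcond=None)
    return W.T, X[:, 0]

def reduce_lrnn(W, x0, keep):
    """Diagonalize W and keep the `keep` eigencomponents with the largest
    |eigenvalue| (one possible relevance criterion)."""
    vals, vecs = np.linalg.eig(W)
    coeffs = np.linalg.solve(vecs, x0)          # express the initial state in the eigenbasis
    idx = np.argsort(-np.abs(vals))[:keep]
    return np.diag(vals[idx]), coeffs[idx], vecs[:, idx]

def predict(W, x0, steps, C=None):
    """Iterate the (possibly reduced) linear dynamics; return the output component."""
    x, out = x0.astype(complex), []
    for _ in range(steps):
        y = x if C is None else C @ x           # map reduced state back if needed
        out.append(y[0].real)
        x = W @ x
    return np.array(out)

# Toy example: two superimposed sines (MSO-like signal).
t = np.arange(300)
f = np.sin(0.2 * t) + np.sin(0.311 * t)
W, x0 = learn_lrnn(f)
W_red, x0_red, C = reduce_lrnn(W, x0, keep=4)   # keep a few dominant eigencomponents
print(np.abs(predict(W, x0, 300) - f).max())
print(np.abs(predict(W_red, x0_red, 300, C) - f).max())
```

In this sketch the reduced network is a diagonal (possibly complex) transition matrix over the kept eigenvalues together with a read-out map back to the original coordinates, which is one way to realize the size reduction mentioned in the abstract.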