Common to all kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. We show that even when there is no relationship between subsequent data points at all (e.g., when the data points are generated at random), RNNs trained with standard backpropagation are still able to reproduce data points from a few steps back in the sequence by memorizing them outright. However, we also show that for classical RNNs, LSTM, and GRU networks, the distance between recurrent calls over which data points can be reproduced this way is severely limited (even compared to data points sharing only a loose connection) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit, far below the information-theoretic one, on the distance between related data points within which RNNs are still able to recognize said relation.
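A minimal sketch (not the authors' code) of the kind of experiment the abstract describes: purely random inputs with no mutual relationship, and a recurrent network trained with standard backpropagation through time to reproduce the input it saw `delay` steps earlier. All names, hyperparameters, and the choice of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

delay, seq_len, batch, dim, hidden = 5, 20, 64, 8, 32

class DelayedCopy(nn.Module):
    def __init__(self):
        super().__init__()
        # swap in nn.RNN or nn.GRU here to compare the three cell types
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

model = DelayedCopy()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x = torch.rand(batch, seq_len, dim)   # random data points, no relation between them
    target = x[:, :-delay, :]             # the input seen `delay` steps earlier
    pred = model(x)[:, delay:, :]         # output at time t should equal input at t - delay
    loss = loss_fn(pred, target)
    opt.zero_grad()
    loss.backward()                       # standard backpropagation (through time)
    opt.step()
```

Increasing `delay` while keeping the network type and size fixed is the knob that probes the memorization limit the abstract refers to.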