Differentiable neural computers extend artificial neural networks with an explicit, interference-free memory, thus enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently, namely an echo state network with an explicit, interference-free memory. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Further, we demonstrate experimentally that our model performs comparably to its fully trained deep version on several typical benchmark tasks for differentiable neural computers.