A differentiable neural computer (DNC) is a memory-augmented neural network devised to solve a wide range of algorithmic and question-answering tasks, and it has shown promising performance in a variety of domains. However, its single-memory operations are not sufficient to store and retrieve the diverse informative representations that many tasks require. Furthermore, the DNC does not explicitly treat memorization itself as a training objective, which inevitably leads to very slow learning. To address these issues, we propose a novel distributed-memory-based, self-supervised DNC architecture for enhanced memory-augmented neural network performance. We introduce (i) a multiple distributed memory block mechanism that stores information independently in each memory block and uses the stored information cooperatively for diverse representations, and (ii) a self-supervised memory loss term that measures how well a given input is written to the memory. Our experiments on algorithmic and question-answering tasks show that the proposed model outperforms all other DNC variants by a large margin and matches the performance of other state-of-the-art memory-based network models.
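The two mechanisms above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual model: the block sizes, the least-used write heuristic, and the cosine-softmax read are all stand-ins for the learned, differentiable addressing the DNC uses, and the reconstruction loss is one plausible instantiation of "how well a given input is written to the memory."

```python
import numpy as np

# Hypothetical sizes (not from the paper): 4 distributed memory blocks,
# each with 8 slots of width 16; inputs are vectors of width 16.
NUM_BLOCKS, NUM_SLOTS, WIDTH = 4, 8, 16

# Each block keeps its own memory matrix and is written to independently.
memories = [np.zeros((NUM_SLOTS, WIDTH)) for _ in range(NUM_BLOCKS)]

def write(x):
    """Write input x independently into every memory block; each block
    picks its emptiest slot (a stand-in for learned write weighting)."""
    for m in memories:
        slot = np.argmin(np.linalg.norm(m, axis=1))
        m[slot] = x

def read(query):
    """Cooperative read: each block attends over its own slots with a
    cosine-similarity softmax, and the per-block reads are averaged."""
    reads = []
    for m in memories:
        sims = m @ query / (np.linalg.norm(m, axis=1)
                            * np.linalg.norm(query) + 1e-8)
        w = np.exp(sims) / np.exp(sims).sum()
        reads.append(w @ m)
    return np.mean(reads, axis=0)

def memory_loss(x):
    """Self-supervised memory loss: after writing x, reading back with
    x itself as the query should reconstruct x (mean squared error)."""
    write(x)
    return float(np.mean((read(x) - x) ** 2))
```

In the real architecture this loss would be differentiable and minimized jointly with the task loss, directly supervising the write path rather than relying only on downstream task gradients.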