Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks address these issues by equipping neural networks with an explicit memory. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented on the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing, paving the way for robust vector-symbolic manipulations applicable to reasoning, fusion, and compression.
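To make the attention mechanism concrete, the following is a minimal NumPy sketch, not the authors' hardware implementation: the memory dimension, number of entries, sharpening factor, and all names are illustrative assumptions. It shows content-based attention as a similarity search over an explicit key memory, and that uncorrelated real-valued HD keys can be approximated by bipolar ones with little loss in retrieval accuracy.

```python
# Minimal sketch of content-based attention over an explicit HD key memory.
# Illustrative only; dimensions and the sharpening factor are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 512, 32                       # HD dimension, number of memory entries

# Unrelated items are represented by (quasi-)uncorrelated real-valued HD vectors.
keys_real = rng.standard_normal((n_items, d))

# Their real-valued components can be approximated by bipolar {-1, +1} components,
# which is what makes analog in-memory dot products with memory devices practical.
keys_bipolar = np.sign(keys_real)

def attention(query, keys, beta=8.0):
    """Content-based attention: sharpened cosine similarity against every key."""
    sims = keys @ query / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-9
    )
    w = np.exp(beta * sims)                # beta sharpens the weight distribution
    return w / w.sum()

# A noisy view of item 3 still retrieves entry 3 from the bipolarized memory.
query = keys_real[3] + 0.3 * rng.standard_normal(d)
print(np.argmax(attention(query, keys_bipolar)))   # -> 3
```

In high dimensions the bipolarized keys remain nearly orthogonal to each other, so the correct entry dominates the attention weights even under query noise.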