Drawing from memory the face of a friend you have not seen in years is a difficult task. However, if you happen to cross paths, you would easily recognize each other. Biological memory is equipped with an impressive compression algorithm that stores the essentials and then infers the details to match perception. The Willshaw Memory is a simple abstract model of cortical computation that implements mechanisms of biological memories. Using our recently proposed sparse coding prescription for visual patterns [34], this model can store and retrieve an impressive amount of real-world data in a fault-tolerant manner. In this paper, we extend the capabilities of the basic Associative Memory Model with a Multiple-Modality framework. In this setting, the memory stores several modalities (e.g., visual, or textual) of each pattern simultaneously. After training, the memory can be used to infer missing modalities when only a subset is perceived. Using a simple encoder-memory-decoder architecture, and a newly proposed iterative retrieval algorithm for the Willshaw Model, we perform experiments on the MNIST dataset. By storing both the images and labels as modalities, a single memory can be used not only to retrieve and complete patterns but also to classify and generate new ones. We further discuss how this model could be used for other learning tasks, thus serving as a biologically-inspired framework for learning.
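To make the mechanism behind the abstract concrete, the following is a minimal sketch of the classic Willshaw binary associative memory: Hebbian clipped learning stores pattern pairs in a binary weight matrix, and retrieval thresholds the dendritic sums. This is an illustration of the standard model only, not the paper's encoder-memory-decoder architecture or its iterative retrieval algorithm; the class name, sizes, and sparsity level are all illustrative assumptions.

```python
import numpy as np

class WillshawMemory:
    """Sketch of a hetero-associative Willshaw memory with binary weights."""

    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_in, n_out), dtype=bool)

    def store(self, x, y):
        # Clipped Hebbian learning: a synapse is switched on (and stays on)
        # whenever its input and output units are active together.
        self.W |= np.outer(x, y)

    def retrieve(self, x, k):
        # Dendritic sums for each output unit, followed by a winner-take-all
        # step that keeps the k most activated units (k = pattern sparsity).
        sums = x.astype(int) @ self.W.astype(int)
        y = np.zeros(self.W.shape[1], dtype=bool)
        y[np.argsort(sums)[-k:]] = True
        return y

# Usage: store one sparse pattern pair, then cue with the input half.
rng = np.random.default_rng(0)
x = np.zeros(100, dtype=bool); x[rng.choice(100, 5, replace=False)] = True
y = np.zeros(100, dtype=bool); y[rng.choice(100, 5, replace=False)] = True
mem = WillshawMemory(100, 100)
mem.store(x, y)
assert (mem.retrieve(x, k=5) == y).all()
```

In the multiple-modality setting described above, `x` and `y` would be sparse codes of two modalities of the same pattern (e.g., an MNIST image and its label), so that cueing with one modality retrieves the other.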