Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanations for many cognitive phenomena. However, attractor memory models are typically trained on orthogonal or random patterns to avoid interference between memories, which renders them infeasible for naturally occurring complex correlated stimuli such as images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, and a columnar and laminar cortical architecture. We evaluate the synergistic effects of the feedforward and recurrent network components on complex pattern recognition tasks using the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and to make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
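To make the two-component architecture concrete, the following is a minimal, hypothetical sketch in Python/NumPy of the interaction described above: a feedforward layer that learns sparse distributed codes, and a recurrent attractor network trained on those codes that performs associative clean-up. For brevity it substitutes plain competitive Hebbian learning with k-winners-take-all sparsification for the paper's Hebbian-Bayesian (BCPNN) rule, and a standard Hopfield network for the attractor component; all names and parameters (`FeedforwardLayer`, `HopfieldAttractor`, `k`, `lr`) are illustrative, not from the paper.

```python
import numpy as np

def kwta(x, k):
    """Sparse distributed activation: keep only the k most active units."""
    h = np.zeros_like(x)
    h[np.argsort(x)[-k:]] = 1.0
    return h

class FeedforwardLayer:
    """Feedforward component (sketch): competitive Hebbian learning with
    k-WTA sparsification, standing in for the Hebbian-Bayesian rule."""
    def __init__(self, n_in, n_hidden, k=5, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.random((n_hidden, n_in))
        self.k, self.lr = k, lr

    def encode(self, x):
        return kwta(self.W @ x, self.k)

    def learn(self, x):
        h = self.encode(x)
        # Hebbian update: active hidden units move their weights toward x
        self.W += self.lr * h[:, None] * (x[None, :] - self.W)

class HopfieldAttractor:
    """Recurrent component (sketch): a Hopfield network trained on the
    feedforward-driven hidden codes; iterating its dynamics implements
    the associative memory clean-up."""
    def __init__(self, n):
        self.J = np.zeros((n, n))

    def store(self, h):
        s = 2 * h - 1                      # binary {0,1} -> bipolar {-1,+1}
        self.J += np.outer(s, s) / len(s)
        np.fill_diagonal(self.J, 0.0)

    def recall(self, h, steps=10):
        s = 2 * h - 1
        for _ in range(steps):
            s = np.sign(self.J @ s)
            s[s == 0] = 1
        return (s + 1) / 2                 # back to {0,1}

# Usage: learn hidden codes for a few random binary patterns, then show the
# attractor restoring a stored code from a severely distorted input.
rng = np.random.default_rng(1)
patterns = (rng.random((3, 64)) > 0.7).astype(float)

ff = FeedforwardLayer(n_in=64, n_hidden=32, k=5)
mem = HopfieldAttractor(n=32)
for _ in range(20):
    for x in patterns:
        ff.learn(x)
for x in patterns:
    mem.store(ff.encode(x))

noisy = patterns[0].copy()
flip = rng.choice(64, size=16, replace=False)   # distort 25% of the input
noisy[flip] = 1 - noisy[flip]
h_clean = ff.encode(patterns[0])
h_recalled = mem.recall(ff.encode(noisy))
print("overlap after clean-up:", h_clean @ h_recalled / h_clean.sum())
```

Running the script stores hidden codes for three random binary patterns and reports the overlap between the stored code and the code recalled from an input with a quarter of its elements flipped, mirroring (at toy scale) the robustness-to-distortion behavior the abstract describes.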