We consider the problem of training a neural network to store a set of patterns with maximal noise robustness. A solution, in terms of optimal weights and state update rules, is derived by training each individual neuron to perform either kernel classification or interpolation with a minimum weight norm. By applying this method to feed-forward and recurrent networks, we derive optimal models, termed kernel memory networks, that include, as special cases, many of the hetero- and auto-associative memory models that have been proposed over the past years, such as modern Hopfield networks and Kanerva's sparse distributed memory. We modify Kanerva's model and demonstrate a simple way to design a kernel memory network that can store an exponential number of continuous-valued patterns with a finite basin of attraction. The framework of kernel memory networks offers a simple and intuitive way to understand the storage capacity of previous memory models, and allows for new biological interpretations in terms of dendritic non-linearities and synaptic cross-talk.
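To make the per-neuron training objective concrete, the following is a minimal sketch of the kind of optimization problem the abstract refers to, in illustrative notation that is our assumption rather than taken from the text: stored patterns $\mathbf{x}^\mu$, $\mu = 1,\dots,m$, a target state $x_i^\mu \in \{-1,+1\}$ for neuron $i$, a feature map $\varphi$ with kernel $k(\mathbf{x}',\mathbf{x}) = \varphi(\mathbf{x}')^\top \varphi(\mathbf{x})$, and dual coefficients $a_{i\mu} \ge 0$. Training neuron $i$ for maximal noise robustness then amounts to a minimum-weight-norm (hard-margin) kernel classification problem,

$$\mathbf{w}_i^{*} \;=\; \arg\min_{\mathbf{w}_i} \|\mathbf{w}_i\|_2^2 \quad \text{subject to} \quad x_i^\mu\, \mathbf{w}_i^\top \varphi(\mathbf{x}^\mu) \;\ge\; 1, \qquad \mu = 1,\dots,m.$$

By the representer theorem, the optimal weights are a linear combination of feature-mapped patterns, $\mathbf{w}_i^{*} = \sum_\mu a_{i\mu}\, x_i^\mu\, \varphi(\mathbf{x}^\mu)$, so the resulting state update rule is a kernel machine,

$$x_i \;\leftarrow\; \operatorname{sign}\!\Big( \sum_{\mu=1}^{m} a_{i\mu}\, x_i^\mu\, k(\mathbf{x}^\mu, \mathbf{x}) \Big).$$

Different choices of kernel $k$ then recover different classical memory models; for instance, an exponential kernel yields an update reminiscent of modern Hopfield networks.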