Motivated by biological considerations, we study sparse neural maps from an input layer to a target layer with sparse activity, and specifically the problem of storing $K$ input-target associations $(x,y)$, or memories, when the target vectors $y$ are sparse. We mathematically prove that $K$ undergoes a phase transition and that, in general and somewhat paradoxically, sparsity in the target layer increases the storage capacity of the map. The target vectors can be chosen arbitrarily, including at random, and the memories can be both encoded and decoded by networks trained using local learning rules, including the simple Hebb rule. These results are robust under a variety of statistical assumptions on the data. The proofs rely on elegant properties of random polytopes and sub-gaussian random vectors. Open problems and connections to capacity theories and polynomial threshold maps are discussed.
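The Hebbian storage scheme mentioned above can be illustrated with a minimal sketch: store $K$ associations $(x_k, y_k)$ with sparse 0/1 targets via the outer-product Hebb rule $W = \sum_k y_k x_k^\top$, then recall by thresholding the drive $Wx$. The dimensions, sparsity level, and threshold below are purely illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, K = 200, 200, 30   # input size, target size, number of memories
s = 10                   # sparsity: active units per target vector

# Random +/-1 input patterns x_k and sparse 0/1 target patterns y_k.
X = rng.choice([-1.0, 1.0], size=(K, n))
Y = np.zeros((K, m))
for k in range(K):
    Y[k, rng.choice(m, size=s, replace=False)] = 1.0

# Hebbian (outer-product) weight matrix: W = sum_k y_k x_k^T.
W = Y.T @ X

# Recall: a target unit fires if its drive on input x_k exceeds a threshold.
# The stored association contributes a signal of size n = x_k . x_k, while
# cross-terms x_l . x_k (l != k) contribute noise of order sqrt(n).
theta = n / 2            # illustrative threshold between signal and noise
Y_hat = (X @ W.T) > theta

recall_accuracy = (Y_hat == Y.astype(bool)).mean()
```

With targets this sparse, few cross-terms hit any given target unit, so the signal $n$ dominates the $O(\sqrt{n})$ interference and recall is near-perfect; densifying the targets or raising $K$ past the phase transition degrades it.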