This paper addresses the problem of encoding information into binary cell assemblies. Spiking neural networks and k-winners-take-all (kWTA) models are two common approaches, but the former is difficult to use for information processing and the latter is too simplistic, lacking important features of the former. We present an intermediate model that retains the computational simplicity of kWTA while exhibiting more flexible and richer dynamics. It uses explicit inhibitory neurons to balance and shape excitation through an iterative procedure. This yields a recurrent interaction between inhibitory and excitatory neurons that adapts better to the input distribution and performs computations such as habituation, decorrelation, and clustering. To demonstrate these capabilities, we investigate Hebbian-like learning rules and propose a new learning rule for binary weights with multiple stabilization mechanisms. Our source code is publicly available.
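To make the contrast between the two regimes concrete, the sketch below compares a classical kWTA step with an iterative excitation-inhibition loop of the kind summarized above. It is a minimal illustration under assumed shapes and names (`kwta`, `iwta`, `w_xy`, `w_xh`, `w_yh`, `w_hy`, `k_h` are all hypothetical), not the paper's actual model or learning rule.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# x is a binary input vector, y the excitatory population, h the inhibitory one.
import numpy as np

def kwta(potentials, k):
    """Classical k-winners-take-all: keep the k largest potentials active."""
    y = np.zeros_like(potentials, dtype=int)
    y[np.argsort(potentials)[-k:]] = 1
    return y

def iwta(x, w_xy, w_xh, w_yh, w_hy, k_h, n_iters=10):
    """Iterative sketch: explicit inhibitory cells h receive the input and
    excitatory feedback, and in turn suppress the excitatory population y;
    the loop repeats until the excitatory activity stops changing."""
    y = np.zeros(w_xy.shape[0], dtype=int)
    h = np.zeros(w_xh.shape[0], dtype=int)
    for _ in range(n_iters):
        h = kwta(w_xh @ x + w_yh @ y, k_h)              # inhibitory layer fires
        y_new = (w_xy @ x - w_hy @ h > 0).astype(int)   # excitation minus inhibition
        if np.array_equal(y_new, y):                    # converged
            break
        y = y_new
    return y, h
```

In this toy version the active set of `y` is shaped by the recurrent inhibition rather than fixed to exactly k winners, which is the flexibility the abstract alludes to.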