Despite the recent success of artificial neural networks, more biologically plausible learning methods may be needed to resolve weaknesses of backpropagation-trained models such as catastrophic forgetting and vulnerability to adversarial attacks. A novel local learning rule is presented that performs online clustering with an upper limit on the number of clusters to be found rather than a fixed cluster count. Instead of using orthogonal-weight or output-activation constraints, activation sparsity is achieved through mutual repulsion of lateral Gaussian neurons, which ensures that multiple neuron centers cannot occupy the same location in the input domain. An update method is also presented for adjusting the widths of the Gaussian neurons in cases where the data samples can be represented by means and variances. The algorithms were applied to the MNIST and CIFAR-10 datasets to create filters capturing the input patterns of pixel patches of various sizes. The experimental results demonstrate stability in the learned parameters across a large number of training samples.
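The online-clustering-by-repulsion idea described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact rule: the variable names, learning rates, Gaussian width `sigma`, and the specific winner-take-all attraction plus pairwise Gaussian repulsion are all hypothetical choices made for this sketch. The key property it demonstrates is that repulsion between lateral Gaussian neurons keeps multiple centers from collapsing onto the same location in the input domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# K is an upper limit on the number of clusters, not a fixed count;
# unused centers simply stay near their initial positions.
K, D = 4, 2                      # max clusters, input dimension (assumed)
centers = rng.normal(0.0, 0.1, size=(K, D))
eta_attract, eta_repel, sigma = 0.05, 0.01, 0.5   # assumed hyperparameters

def update(x):
    """One online step: attract the winning center toward the sample,
    then push the other centers away from the winner with a Gaussian
    weight, so two centers cannot settle on the same location."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    win = int(np.argmin(d2))
    centers[win] += eta_attract * (x - centers[win])
    for j in range(K):
        if j == win:
            continue
        diff = centers[j] - centers[win]
        w = np.exp(-(diff ** 2).sum() / (2.0 * sigma ** 2))
        centers[j] += eta_repel * w * diff   # repulsion away from winner

# Toy data: two well-separated blobs standing in for input patches.
data = np.vstack([rng.normal(-1.0, 0.1, (200, D)),
                  rng.normal(+1.0, 0.1, (200, D))])
rng.shuffle(data)
for x in data:
    update(x)
```

After a pass over the stream, at most one center settles on each blob while the repulsion term prevents the remaining centers from drifting onto an already-occupied location.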