"Sparse" neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. Whereas in machine learning, "sparsity" is related to a penalty term that leads to some connecting weights becoming small or zero, in biological brains, sparsity is often created when high spiking thresholds prevent neuronal activity. Here we introduce sparsity into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to contribute to decision-making but suppressing information from neurons with high thresholds. This approach, which we term "SpaRCe", optimises the sparsity level of the reservoir without affecting the reservoir dynamics. The read-out weights and the thresholds are learned by an on-line gradient rule that minimises an error function on the outputs of the network. Threshold learning occurs by the balance of two opposing forces: reducing inter-neuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on classification problems and find that threshold learning improves performance compared to standard reservoir computing. SpaRCe alleviates the problem of catastrophic forgetting, a problem most evident in standard echo state networks and recurrent neural networks in general, due to increasing the number of task-specialised neurons that are included in the network decisions.