The backpropagation algorithm that drives the success of deep learning most likely differs from the learning mechanism of the brain. In this paper, we develop a biologically inspired learning rule that discovers features through local competitions among neurons, following Hebb's famous proposal. We demonstrate that the unsupervised features learned by this local learning rule can serve as a pre-trained model that improves the performance of some supervised learning tasks. More importantly, this local learning rule enables us to build a new learning paradigm, very different from backpropagation, named activation learning, in which the output activation of the neural network roughly measures how probable the input patterns are. Activation learning is capable of learning rich local features from only a few input patterns, and it performs significantly better than backpropagation when the number of training samples is relatively small. This learning paradigm unifies unsupervised learning, supervised learning, and generative models, and it is also more robust to adversarial attacks, paving the way toward general-task neural networks.
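To make the idea of feature discovery by local competition concrete, the following is a minimal sketch of a generic winner-take-all Hebbian update. It is illustrative only and is not the paper's exact rule: the learning rate `eta`, the weight normalization, and the hard argmax competition are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 16, 4

# Unit-norm weight vectors, one per competing neuron.
W = rng.normal(size=(n_neurons, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def competitive_hebbian_step(W, x, eta=0.1):
    """One local update: neurons compete, only the winner learns."""
    activations = W @ x                     # feed-forward activations
    winner = int(np.argmax(activations))    # competition: strongest neuron wins
    W[winner] += eta * (x - W[winner])      # Hebbian-style pull toward the input
    W[winner] /= np.linalg.norm(W[winner])  # keep the winner's weights bounded
    return W, winner

# Train on random unit-norm inputs; each neuron drifts toward a
# cluster of patterns it repeatedly wins, without any backpropagated error.
for _ in range(200):
    x = rng.normal(size=n_inputs)
    x /= np.linalg.norm(x)
    W, _ = competitive_hebbian_step(W, x)
```

The update is purely local: each neuron's weight change depends only on its own activation and the input, with no global error signal, which is the property the abstract contrasts with backpropagation.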