Despite its great success, backpropagation has limitations that motivate the investigation of new learning methods. In this study, we present a biologically plausible local learning rule that improves upon Hebb's well-known proposal and discovers unsupervised features through local competition among neurons. This simple learning rule enables a forward learning paradigm called activation learning, in which the output activation (the sum of the squared outputs) of a neural network estimates the likelihood of the input patterns: in short, "learn more, activate more." On several small classical datasets, activation learning with a fully connected network performs comparably to backpropagation for classification, and outperforms it when training samples are scarce or inputs are subject to unpredictable disturbances. Moreover, the same trained network can serve a variety of tasks, including image generation and image completion. Activation learning also achieves state-of-the-art performance on several real-world anomaly-detection datasets. This new learning paradigm, which has the potential to unify supervised, unsupervised, and semi-supervised learning and shows greater resistance to adversarial attacks, deserves in-depth investigation.
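To make the "learn more, activate more" idea concrete, the sketch below illustrates one plausible reading of the abstract: the output activation is the sum of squared outputs of a layer, and classification picks the candidate label whose one-hot encoding, appended to the input, maximizes that activation. The weight matrix, layer sizes, and the label-appending scheme here are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained layer: in activation learning the weights would be
# learned by the local competitive rule; here they are random for illustration.
n_in, n_classes, n_out = 10, 3, 16
W = rng.normal(size=(n_out, n_in + n_classes))

def activation(x):
    """Output activation of the layer: sum of squared outputs."""
    y = W @ x
    return float(np.sum(y ** 2))  # "learn more, activate more"

def classify(x):
    """Hypothetical inference: append each candidate label as a one-hot
    vector and return the label that yields the highest activation."""
    scores = []
    for c in range(n_classes):
        onehot = np.zeros(n_classes)
        onehot[c] = 1.0
        scores.append(activation(np.concatenate([x, onehot])))
    return int(np.argmax(scores))

x = rng.normal(size=n_in)   # an input pattern
pred = classify(x)          # predicted class index in {0, 1, 2}
```

Because the activation is a scalar score over input patterns rather than a class-specific output head, the same trained network could in principle be reused for other tasks (e.g., scoring partially observed patterns for completion), which matches the multi-task claim in the abstract.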