Continual Learning aims to bring machine learning into a more realistic scenario, where tasks are learned sequentially and the i.i.d. assumption is not preserved. Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks. To reduce this performance gap, we investigate whether biologically inspired Hebbian learning is useful for tackling continual-learning challenges. In particular, we highlight a realistic and often overlooked unsupervised setting, in which the learner must build representations without any supervision. By combining sparse neural networks with the Hebbian learning principle, we build a simple yet effective alternative (HebbCL) to typical neural network models trained via gradient descent. Thanks to Hebbian learning, the network has easily interpretable weights, which may be essential in critical applications such as security or healthcare. We demonstrate the efficacy of HebbCL in an unsupervised learning setting on the MNIST and Omniglot datasets. We also adapt the algorithm to the supervised scenario and obtain promising results in class-incremental learning.
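The abstract does not specify HebbCL's exact update rule, but the Hebbian learning principle it builds on ("neurons that fire together wire together") can be illustrated with a minimal sketch. The function below is a generic Hebbian weight update with per-neuron normalization, not the paper's actual algorithm; the names `hebbian_update`, `W`, and `x` are illustrative.

```python
import numpy as np

def hebbian_update(W, x, lr=0.1):
    """One generic Hebbian step on weight matrix W for input vector x.

    This is an illustrative sketch of the principle, not HebbCL itself.
    """
    # Postsynaptic activations for the current input.
    y = W @ x
    # Hebbian rule: strengthen weights between co-active units,
    # i.e. delta_w[i, j] = lr * y[i] * x[j].
    W = W + lr * np.outer(y, x)
    # Normalize each neuron's weight vector so weights stay bounded
    # (plain Hebbian growth is otherwise unstable).
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # 4 neurons, 8 inputs
x = rng.normal(size=8)
W_new = hebbian_update(W, x)
```

Because the update depends only on local pre- and post-synaptic activity rather than a backpropagated error signal, each weight directly reflects input correlations, which is the source of the interpretability claimed above.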