When training neural networks for classification with backpropagation, parameters are updated on every training example, even when the sample is already classified correctly. In contrast, humans concentrate their learning effort on errors. Inspired by human learning, we introduce lazy learning, which updates only on incorrectly classified samples. Lazy learning can be implemented in a few lines of code and requires no hyperparameter tuning. It achieves state-of-the-art performance and is particularly suited to large datasets under time constraints. For instance, it reaches 99.2% test accuracy on Extended MNIST using a single-layer MLP, and does so 7.6x faster than a matched backprop network.
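A minimal sketch of the idea, assuming a linear softmax classifier trained with full-batch gradient steps: each step first predicts, then computes the cross-entropy gradient only over the misclassified subset and skips the update entirely when the batch is fully correct. The function name `lazy_update` and the exact masking scheme are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def lazy_update(W, b, X, y, lr=0.1):
    """One lazy-learning step: learn only from misclassified samples.

    W: (d, k) weights, b: (k,) biases, X: (n, d) batch, y: (n,) int labels.
    Hypothetical helper; the paper's exact formulation may differ.
    """
    preds = (X @ W + b).argmax(axis=1)
    wrong = preds != y                      # mask: learn only on errors
    if not wrong.any():
        return W, b                         # all correct -> no update at all
    Xe, ye = X[wrong], y[wrong]
    # softmax cross-entropy gradient, computed on the error subset only
    z = Xe @ W + b
    z -= z.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(ye)), ye] -= 1.0        # dL/dz = softmax - one_hot(y)
    gW = Xe.T @ p / len(ye)
    gb = p.mean(axis=0)
    return W - lr * gW, b - lr * gb
```

As correct samples stop contributing, each step touches a shrinking error set, which is where the speedup on large datasets would come from under these assumptions.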