Learning from mistakes is an effective learning strategy widely used by humans, in which a learner focuses more on mistakes in order to avoid them in the future and thereby improve overall learning outcomes. In this work, we investigate how effectively this learning ability can be leveraged to improve machine learning models. We propose a simple and effective multi-level optimization framework called learning from mistakes using class weighting (LFM-CW), inspired by mistake-driven learning, to train better machine learning models. In this formulation, the primary objective is to train a model that performs well on the target task by using a re-weighting technique. We learn the class weights by minimizing the validation loss of the model, and we re-train the model on real data together with synthetic data from an image generator, weighted by class-wise performance. We apply our LFM-CW framework with differentiable architecture search methods to image classification datasets such as CIFAR and ImageNet, and the results show that our proposed strategy achieves lower error rates than the baselines.
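The core idea of class re-weighting driven by class-wise validation performance can be sketched as follows. This is a minimal illustration, not the paper's actual optimization procedure: the function names, the softmax-style weighting rule, and the `temperature` parameter are assumptions introduced here for clarity; the full LFM-CW framework learns the weights via multi-level optimization rather than this one-shot heuristic.

```python
import numpy as np

def class_weighted_nll(log_probs, labels, class_weights):
    """Per-example negative log-likelihood, scaled by the weight of each
    example's true class, averaged over the batch."""
    per_example = -log_probs[np.arange(len(labels)), labels]
    return np.mean(class_weights[labels] * per_example)

def weights_from_val_errors(per_class_error, temperature=1.0):
    """Assign larger weights to classes with higher validation error,
    normalized so that the average weight is 1."""
    w = np.exp(per_class_error / temperature)
    return w * len(w) / w.sum()

# Example: class 1 has the worst validation error, so it is up-weighted.
val_errors = np.array([0.1, 0.4, 0.2])
weights = weights_from_val_errors(val_errors)

# The weights then re-scale the training loss per class.
log_probs = np.log(np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.8, 0.1]]))
labels = np.array([0, 1])
loss = class_weighted_nll(log_probs, labels, weights)
```

In the actual framework, the weights are treated as learnable variables optimized against the validation loss of the re-trained model, closing the loop between training and validation.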