Recently, the concept of teaching has been introduced into machine learning, in which a teacher model guides the training of a student model (the model used in real tasks) through data selection, loss function design, etc. Learning to reweight, a specific kind of teaching that reweights training data using a teacher model, has received much attention due to its simplicity and effectiveness. In existing learning-to-reweight works, the teacher model only utilizes shallow/surface information, such as the training iteration number and the loss/accuracy of the student model on training/validation sets, but ignores the internal states of the student model, which limits the potential of learning to reweight. In this work, we propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model, and the teacher model returns adaptive weights of training samples to enhance the training of the student model. The teacher model is jointly trained with the student model using meta gradients propagated from a validation set. Experiments on image classification with clean/noisy labels and on neural machine translation empirically demonstrate that our algorithm yields significant improvements over previous methods.
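To make the bilevel idea concrete, here is a minimal sketch of learning to reweight with meta gradients, not the paper's actual method: the student is a scalar linear model, the teacher holds one logit per training sample, and the teacher is updated by differentiating the validation loss through one student gradient step. All names, the toy data, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: three clean samples (y = 2x) and two label-noise
# samples (y = -2x). A small clean validation set drives the meta gradient.
x_tr = np.array([1.0, 2.0, 3.0, 1.0, 2.0])
y_tr = np.array([2.0, 4.0, 6.0, -2.0, -4.0])
x_val = np.array([1.0, 2.0, 3.0])
y_val = np.array([2.0, 4.0, 6.0])

w = 0.0                # student parameter (illustrative linear model)
theta = np.zeros(5)    # teacher parameters: one logit per training sample
lr_w, lr_t = 0.01, 0.5

val_losses = []
for _ in range(300):
    v = sigmoid(theta)                       # per-sample weights from teacher
    r_tr = w * x_tr - y_tr                   # training residuals
    grad_w = np.sum(2.0 * v * x_tr * r_tr)   # gradient of weighted train loss
    w_new = w - lr_w * grad_w                # one student update step

    r_val = w_new * x_val - y_val
    val_losses.append(np.mean(r_val ** 2))   # validation loss after the step
    dval_dw = np.mean(2.0 * r_val * x_val)   # dL_val / dw'

    # Meta gradient via the chain rule through the student update:
    # dw'/dtheta_i = -lr_w * sigmoid'(theta_i) * 2 * x_i * r_i
    dw_dtheta = -lr_w * v * (1.0 - v) * 2.0 * x_tr * r_tr
    theta -= lr_t * dval_dw * dw_dtheta      # teacher step on meta gradient
    w = w_new                                # commit the student step
```

Run on this toy data, the teacher drives the weights of the label-noise samples toward zero, and the student converges near the clean solution w = 2; real implementations replace the analytic chain rule with automatic differentiation through the student update.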