Interleaving learning is a human learning technique in which a learner interleaves the study of multiple topics, which increases long-term retention and improves the ability to transfer learned knowledge. Inspired by this human learning technique, in this paper we explore whether interleaving is also beneficial for improving the performance of machine learning models. We propose a novel machine learning framework referred to as interleaving learning (IL). In our framework, a set of models collaboratively learn a data encoder in an interleaving fashion: the encoder is trained by model 1 for a while, then passed to model 2 for further training, then to model 3, and so on; after being trained by all models, the encoder returns to model 1 and is trained again, then moves on to model 2, model 3, and so forth. This process repeats for multiple rounds. Our framework is based on multi-level optimization and consists of multiple interconnected learning stages. We develop an efficient gradient-based algorithm to solve the multi-level optimization problem. We apply interleaving learning to search for neural architectures for image classification on CIFAR-10, CIFAR-100, and ImageNet. Experimental results strongly demonstrate the effectiveness of our method.
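To make the round-robin procedure concrete, below is a minimal PyTorch-style sketch of the interleaving loop: a shared encoder is trained by each of K models in turn, cycling over the models for several rounds. This is only an illustration under simplifying assumptions, not the paper's method: the encoder and head architectures, the synthetic data, and all hyperparameters are hypothetical placeholders, and the sketch uses plain alternating training rather than the multi-level optimization algorithm described above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Shared data encoder, passed among the models in an interleaving fashion.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
# One task-specific head per model (K = 3 here; all sizes are illustrative).
heads = nn.ModuleList([nn.Linear(128, 10) for _ in range(3)])

# Synthetic per-model datasets standing in for each model's training data.
loaders = [
    DataLoader(TensorDataset(torch.randn(256, 784),
                             torch.randint(0, 10, (256,))),
               batch_size=32, shuffle=True)
    for _ in range(len(heads))
]

criterion = nn.CrossEntropyLoss()
num_rounds = 5  # the interleaving process repeats for multiple rounds

for _ in range(num_rounds):
    for head, loader in zip(heads, loaders):  # model 1, then 2, then 3, ...
        optimizer = torch.optim.SGD(
            list(encoder.parameters()) + list(head.parameters()), lr=0.01)
        for x, y in loader:  # model k trains the shared encoder for a while
            loss = criterion(head(encoder(x)), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # The encoder is now handed off to the next model for further training.
```

In this toy version, the knowledge transfer happens implicitly through the shared encoder parameters; the actual framework formulates each hand-off as a stage in a multi-level optimization problem.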