Learning by self-explanation is an effective technique in human learning: a student explains a learned topic to themselves to deepen their understanding of it. It is natural to ask whether this explanation-driven methodology, widely used by humans, can improve machine learning as well. Inspired by this, we propose a novel machine learning method called learning by self-explanation (LeaSE). In our approach, an explainer model improves its learning ability by trying to clearly explain to an audience model how prediction outcomes are made. LeaSE is formulated as a four-level optimization problem involving a sequence of four learning stages conducted end-to-end in a unified framework: 1) the explainer learns; 2) the explainer explains; 3) the audience learns; 4) the explainer re-learns based on the audience's performance. We develop an efficient algorithm to solve the LeaSE problem. We apply LeaSE to neural architecture search on CIFAR-100, CIFAR-10, and ImageNet. Experimental results demonstrate the effectiveness of our method.
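The four-stage loop can be illustrated with a toy numerical sketch. This is a hypothetical analogue under simplifying assumptions, not the paper's actual method (which searches over neural architectures): here the "explainer" is a ridge regression whose regularization strength stands in for the searched architecture, the "audience" learns from the explainer's predictions, and the explainer's hyperparameter is re-chosen according to the audience's validation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with train/validation splits.
X_tr, X_val = rng.normal(size=(50, 3)), rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=50)
y_val = X_val @ w_true + 0.1 * rng.normal(size=20)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: a stand-in 'model learns' step."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lease_round(lam_candidates):
    """One toy round of the four LeaSE stages (grid search in place of
    the paper's gradient-based four-level optimization)."""
    best_lam, best_val = None, np.inf
    for lam in lam_candidates:
        # 1) Explainer learns on the training data.
        w_explainer = ridge_fit(X_tr, y_tr, lam)
        # 2) Explainer explains: its predictions serve as pseudo-labels
        #    that the audience will learn from.
        explanations = X_tr @ w_explainer
        # 3) Audience learns from the explainer's outputs.
        w_audience = ridge_fit(X_tr, explanations, 0.01)
        # 4) Explainer re-learns: its hyperparameter (the "architecture"
        #    analogue) is chosen by the audience's validation performance.
        val_loss = np.mean((X_val @ w_audience - y_val) ** 2)
        if val_loss < best_val:
            best_lam, best_val = lam, val_loss
    return best_lam, best_val

lam, loss = lease_round([0.01, 0.1, 1.0, 10.0])
print(lam, loss)
```

The key structural point the sketch preserves is that the explainer is evaluated indirectly: its quality is measured by how well a separate audience model, trained only on the explainer's outputs, performs on held-out data.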