We formally study how an ensemble of deep learning models can improve test accuracy, and how the superior performance of the ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the SAME architecture, trained using the SAME algorithm on the SAME data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory (such as boosting or NTKs, neural tangent kernels). To properly understand them, we develop a theory showing that when the data has a structure we refer to as ``multi-view'', then an ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model by training that model to match the output of the ensemble instead of the true labels. Our result sheds light on how ensembles work in deep learning in a way that is completely different from traditional theorems, and how the ``dark knowledge'' hidden in the outputs of the ensemble can be used in distillation. Finally, we prove that self-distillation can also be viewed as implicitly combining ensemble and knowledge distillation to improve test accuracy.
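To make the setup concrete, below is a minimal sketch (not the paper's code) of the two procedures described above: an ensemble formed by averaging the outputs of a few networks that share the same architecture, training algorithm, and data and differ only in their initialization seeds, followed by knowledge distillation, in which a single model is trained to match the ensemble's soft outputs instead of the true labels. The toy MLP, the synthetic data, the SGD hyperparameters, and the use of a KL-divergence distillation loss are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's code): ensemble averaging and
# knowledge distillation with toy MLPs that differ only in their init seed.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(seed: int) -> nn.Module:
    """Same architecture for every model; only the initialization seed differs."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

def train(model: nn.Module, x: torch.Tensor, y: torch.Tensor, epochs: int = 50):
    """Standard training on the true labels (same algorithm, same data)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

# Synthetic data standing in for the training set (illustrative only).
torch.manual_seed(0)
x, y = torch.randn(512, 20), torch.randint(0, 5, (512,))

# Ensemble: independently trained copies that differ only by the random seed.
members = [make_model(seed) for seed in range(3)]
for m in members:
    train(m, x, y)

with torch.no_grad():
    # Ensemble output = average of the individual models' outputs.
    ensemble_probs = torch.stack([F.softmax(m(x), dim=-1) for m in members]).mean(0)

# Knowledge distillation: train a single model of the same architecture to
# match the ensemble's soft outputs instead of the hard true labels.
student = make_model(seed=100)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    log_p = F.log_softmax(student(x), dim=-1)
    F.kl_div(log_p, ensemble_probs, reduction="batchmean").backward()
    opt.step()
```

Under the same assumptions, self-distillation would correspond to replacing `ensemble_probs` with the soft outputs of a single independently trained model rather than the average over several.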