We analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task. Our bounds utilize a quantity called the majority predictor accuracy, which can be computed efficiently from data. We show that our theory is useful in practice since it implies that the majority predictor accuracy can be used as a transferability measure, a fact that is also validated by our experiments.
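Since the abstract notes that the majority predictor accuracy (MPA) can be computed efficiently from data, here is a minimal sketch of one way such a quantity could be computed. It assumes the common definition of the majority predictor: for each label predicted by the source model, output the most frequent target label among examples that received that source prediction; MPA is this predictor's empirical accuracy. The function name and toy data below are illustrative, not from the paper.

```python
import numpy as np

def majority_predictor_accuracy(source_preds, target_labels):
    """Empirical accuracy of the majority predictor (illustrative sketch).

    For each source-predicted label z, the majority predictor outputs the
    most frequent target label among examples whose source prediction is z.
    """
    source_preds = np.asarray(source_preds)
    target_labels = np.asarray(target_labels)
    correct = 0
    for z in np.unique(source_preds):
        group = target_labels[source_preds == z]
        # Within this group, the majority predictor is right exactly
        # as often as the most frequent target label occurs.
        _, counts = np.unique(group, return_counts=True)
        correct += counts.max()
    return correct / len(target_labels)

# Hypothetical usage with toy labels.
src = [0, 0, 0, 1, 1, 2]   # labels predicted by the source model
tgt = [0, 0, 1, 1, 1, 2]   # ground-truth target labels
print(majority_predictor_accuracy(src, tgt))  # 5/6 ≈ 0.833
```

One pass over the unique source labels suffices, so the computation is linear in the dataset size, consistent with the claim that the quantity is efficient to compute.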