Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task. In this work, we propose exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures. Such graphs are obtained by connecting samples that yield similar latent representations at a given layer of the considered DNN. We then obtain a generalization score by measuring how strongly samples of distinct classes are connected in the LGGs. This score allowed us to rank 3rd in the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition.
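To make the idea concrete, the following is a minimal sketch of the LGG-based score described above, not the authors' exact implementation: it builds a k-nearest-neighbor graph over one layer's latent features and returns the fraction of edge weight linking samples of distinct classes. The choice of k, the use of cosine similarity, and the sign convention (higher cross-class connectivity suggesting weaker class separation) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' exact implementation):
# build a k-NN Latent Geometry Graph from one layer's features and score
# how strongly samples of distinct classes are connected.
import numpy as np

def lgg_generalization_score(features: np.ndarray, labels: np.ndarray, k: int = 10) -> float:
    """features: (n_samples, dim) latent representations from one DNN layer.
    labels: (n_samples,) class labels. Returns the fraction of k-NN edge
    weight that links samples of *distinct* classes (more class mixing in
    the latent space suggests weaker class separation at this layer)."""
    # Cosine similarity between all pairs of latent vectors
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops

    cross_weight, total_weight = 0.0, 0.0
    for i in range(len(features)):
        # Connect sample i only to its k most similar samples
        neighbors = np.argpartition(sim[i], -k)[-k:]
        for j in neighbors:
            w = max(sim[i, j], 0.0)  # clip negative similarities to zero
            total_weight += w
            if labels[i] != labels[j]:
                cross_weight += w
    return cross_weight / total_weight if total_weight > 0 else 0.0
```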