Measuring concept generalization, i.e., the extent to which models trained on a set of (seen) visual concepts can be used to recognize a new set of (unseen) concepts, is a popular way of evaluating visual representations, especially when they are learned with self-supervised learning. Nonetheless, the choice of which unseen concepts to use is usually made arbitrarily, and independently from the seen concepts used to train representations, thus ignoring any semantic relationships between the two. In this paper, we argue that semantic relationships between seen and unseen concepts affect generalization performance and propose ImageNet-CoG, a novel benchmark on the ImageNet dataset that enables measuring concept generalization in a principled way. Our benchmark leverages expert knowledge that comes from WordNet in order to define a sequence of unseen ImageNet concept sets that are semantically more and more distant from the ImageNet-1K subset, a ubiquitous training set. This allows us to benchmark visual representations learned on ImageNet-1K out of the box: we analyse a number of such models from supervised, semi-supervised and self-supervised approaches through the prism of concept generalization, and show how our benchmark is able to uncover a number of interesting insights. We will provide resources for the benchmark at https://europe.naverlabs.com/cog-benchmark.
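To make the construction concrete, the sketch below shows one way to order candidate unseen concepts by their WordNet semantic distance from a seen concept set, using NLTK's WordNet interface. This is a minimal illustration, not the benchmark's exact procedure: the similarity measure (path similarity here) and the helper names are assumptions, and the actual ImageNet-CoG construction may differ.

```python
# Minimal sketch: rank candidate (unseen) concepts by WordNet semantic
# distance from a set of seen concepts. Illustrative only -- the actual
# ImageNet-CoG construction may use a different similarity measure.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def max_similarity_to_seen(candidate, seen_synsets):
    """Similarity of a candidate synset to its closest seen synset."""
    return max(candidate.path_similarity(s) or 0.0 for s in seen_synsets)

def rank_unseen_by_distance(seen_ids, candidate_ids):
    """Sort candidate concepts from semantically closest to farthest.

    Concepts are given as ImageNet-style WordNet IDs, e.g. 'n02084071',
    which encode a part of speech and a synset offset.
    """
    to_synset = lambda wnid: wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))
    seen = [to_synset(w) for w in seen_ids]
    scored = [(w, max_similarity_to_seen(to_synset(w), seen)) for w in candidate_ids]
    # Higher similarity = semantically closer to the seen set, so closer
    # concepts come first and farther concept sets can be sliced off the tail.
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Example: 'tiger' (n02129604) should rank closer than 'abacus' (n02666196)
# to a seen set containing 'dog' (n02084071).
print(rank_unseen_by_distance(['n02084071'], ['n02129604', 'n02666196']))
```

Slicing such a ranked list into consecutive chunks yields concept sets that are increasingly distant from the seen training set, which is the ordering idea the benchmark relies on.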