Measuring concept generalization, i.e., the extent to which models trained on a set of (seen) visual concepts can be leveraged to recognize a new set of (unseen) concepts, is a popular way of evaluating visual representations, especially in a self-supervised learning framework. Nonetheless, the choice of unseen concepts for such an evaluation is usually made arbitrarily, and independently of the seen concepts used to train the representations, thus ignoring any semantic relationships between the two. In this paper, we argue that the semantic relationships between seen and unseen concepts affect generalization performance and propose ImageNet-CoG, a novel benchmark on the ImageNet-21K (IN-21K) dataset that enables measuring concept generalization in a principled way. Our benchmark leverages expert knowledge from WordNet to define a sequence of unseen IN-21K concept sets that are semantically more and more distant from the ImageNet-1K (IN-1K) subset, a ubiquitous training set. This allows us to benchmark visual representations learned on IN-1K out of the box. We conduct a large-scale study encompassing 31 convolution- and transformer-based models and show how different architectures, levels of supervision, regularization techniques, and use of web data impact the concept generalization performance.
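To make the ordering principle concrete, the sketch below ranks candidate unseen WordNet synsets by their semantic similarity to a set of seen synsets and splits the ranking into increasingly distant levels. This is a minimal illustration only: it uses NLTK's WordNet path similarity as a stand-in for the benchmark's actual semantic-distance measure, and the synsets and level count are hypothetical placeholders, not the real IN-1K/IN-21K class lists.

```python
# Minimal sketch: order "unseen" WordNet synsets by semantic distance to a
# "seen" set, then split them into levels of increasing distance.
# Path similarity is an assumption here, standing in for the paper's measure;
# the synsets below are illustrative, not the actual IN-1K/IN-21K concepts.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

# Hypothetical seen concepts (IN-1K classes map to WordNet synsets).
seen = [wn.synset("dog.n.01"), wn.synset("cat.n.01")]
# Hypothetical unseen candidates drawn from the rest of the hierarchy.
candidates = [wn.synset("wolf.n.01"), wn.synset("oak.n.01"), wn.synset("car.n.01")]

def similarity_to_seen(synset):
    # A candidate is as close to the seen set as its nearest seen concept;
    # path_similarity returns None for incomparable pairs, treated as 0.
    return max(synset.path_similarity(s) or 0.0 for s in seen)

# Rank candidates from most to least similar to the seen set, then carve the
# ranking into equally sized levels of increasing semantic distance.
ranked = sorted(candidates, key=similarity_to_seen, reverse=True)
num_levels = 3
level_size = len(ranked) // num_levels
levels = [ranked[i * level_size:(i + 1) * level_size] for i in range(num_levels)]
for i, level in enumerate(levels, 1):
    print(f"level {i}:", [s.name() for s in level])
```

Under this toy setup, wolf.n.01 lands in the closest level (it shares the canine subtree with dog.n.01), while car.n.01 falls into the most distant one, mirroring how the benchmark's concept levels grow semantically farther from the IN-1K training concepts.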