Coreset selection is among the most effective ways to reduce the training time of CNNs. However, little is known about how the resultant models behave under variations of the coreset size, and under different choices of datasets and models. Moreover, given the recent paradigm shift towards transformer-based models, it remains an open question how coreset selection affects their performance. There are several similar intriguing questions that need to be answered for a wide acceptance of coreset selection methods, and this paper attempts to answer some of them. We present a systematic benchmarking setup and perform a rigorous comparison of different coreset selection methods on CNNs and transformers. Our investigation reveals that, under certain circumstances, random selection of subsets is more robust and stable than the state-of-the-art (SOTA) selection methods. We demonstrate that the conventional concept of uniform subset sampling across the various classes of the data is not the appropriate choice; rather, samples should be chosen adaptively based on the complexity of the data distribution of each class. Transformers are generally pretrained on large datasets, and we show that, for certain target datasets, this pretraining helps keep their performance stable even at very small coreset sizes. We further show that when no pretraining is done, or when pretrained transformer models are used with non-natural images (e.g., medical data), CNNs tend to generalize better than transformers, even at very small coreset sizes. Lastly, we demonstrate that, in the absence of the right pretraining, CNNs are better at learning the semantic coherence between spatially distant objects within an image, and they tend to outperform transformers at almost all choices of the coreset size.
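To make the class-adaptive sampling idea concrete, the following is a minimal sketch, not the paper's exact method: it assumes a per-class "difficulty" score is already available (e.g., an average loss or forgetting statistic per class), allocates the coreset budget in proportion to that score instead of uniformly, and then draws samples at random within each class. All names (adaptive_budgets, random_coreset, difficulty) are hypothetical.

    import numpy as np

    def adaptive_budgets(difficulty, total_budget):
        """Allocate the coreset budget across classes in proportion to difficulty.

        difficulty:   1-D array with one non-negative score per class
                      (assumed given; not computed here).
        total_budget: total number of samples to keep in the coreset.
        """
        weights = difficulty / difficulty.sum()
        budgets = np.floor(weights * total_budget).astype(int)
        # Hand any remainder from flooring to the hardest classes first.
        for idx in np.argsort(-difficulty)[: total_budget - budgets.sum()]:
            budgets[idx] += 1
        return budgets

    def random_coreset(labels, budgets, rng=None):
        """Draw each class's budget uniformly at random from its indices."""
        rng = rng or np.random.default_rng(0)
        picked = []
        for cls, b in enumerate(budgets):
            pool = np.where(labels == cls)[0]
            picked.append(rng.choice(pool, size=min(b, len(pool)), replace=False))
        return np.concatenate(picked)

    # Example: three classes of unequal difficulty, 30-sample coreset.
    labels = np.repeat([0, 1, 2], 100)
    budgets = adaptive_budgets(np.array([1.0, 3.0, 6.0]), 30)  # -> [3, 9, 18]
    coreset_idx = random_coreset(labels, budgets)

Under this allocation, a class whose distribution is judged harder receives proportionally more of the budget, while uniform sampling would have kept 10 samples per class regardless of difficulty.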