Contrastive learning approaches have achieved great success in learning visual representations with few labels on the target classes. This suggests a tantalizing possibility of scaling them up beyond a curated "seed" benchmark by incorporating more unlabeled images from internet-scale external sources to enhance performance. In practice, however, larger amounts of unlabeled data require more computing resources for bigger model sizes and longer training. Moreover, open-world unlabeled data usually follow an implicit long-tail distribution of classes or attributes, and many samples do not belong to the target classes at all. Blindly leveraging all unlabeled data can hence lead to data imbalance and distraction issues. This motivates us to seek a principled approach for strategically selecting unlabeled data from an external source, in order to learn generalizable, balanced, and diverse representations for the relevant classes. In this work, we present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK), which follows three simple principles: (1) tailness, which encourages sampling of examples from tail classes by sorting the empirical contrastive loss expectation (ECLE) of samples over random data augmentations; (2) proximity, which rejects out-of-distribution outliers that may distract training; and (3) diversity, which ensures diversity in the set of sampled examples. Empirically, using ImageNet-100-LT (without labels) as the seed dataset and two "noisy" external data sources, we demonstrate that MAK consistently improves both the overall representation quality and the class balancedness of the learned features, as evaluated by linear classifier probing under full-shot and few-shot settings. The code is available at: https://github.com/VITA-Group/MAK
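To make the three principles concrete, below is a minimal sketch of a MAK-style sampling step, not the authors' released implementation. It assumes per-sample ECLE scores (the contrastive loss averaged over random augmentations) and feature embeddings for the external pool and the seed set have already been computed; the function name mak_sample and the tail_pool and prox_quantile parameters are hypothetical illustrations of the tailness, proximity, and diversity stages.

```python
import numpy as np

def mak_sample(ecle, feats, seed_feats, k, tail_pool=5000, prox_quantile=0.9):
    """Sketch of Model-Aware K-center (MAK) sampling.

    ecle       : (N,) contrastive loss per external sample, averaged over
                 random augmentations (higher = more tail-like)
    feats      : (N, d) feature embeddings of the external pool
    seed_feats : (M, d) feature embeddings of the seed dataset
    k          : number of external samples to select
    """
    # Tailness: keep only the candidates with the highest ECLE.
    cand = np.argsort(-ecle)[:tail_pool]

    # Proximity: reject out-of-distribution outliers, i.e. candidates whose
    # distance to the nearest seed feature exceeds a quantile cutoff.
    d_seed = np.linalg.norm(
        feats[cand][:, None, :] - seed_feats[None, :, :], axis=-1).min(axis=1)
    keep = d_seed <= np.quantile(d_seed, prox_quantile)
    cand, d_min = cand[keep], d_seed[keep]

    # Diversity: greedy K-center (farthest-first) over surviving candidates,
    # initialized with distances to the seed set so picks spread outward.
    selected = []
    for _ in range(min(k, len(cand))):
        i = int(np.argmax(d_min))              # farthest from chosen set
        selected.append(int(cand[i]))
        d_new = np.linalg.norm(feats[cand] - feats[cand[i]], axis=1)
        d_min = np.minimum(d_min, d_new)       # update nearest-center distance
    return selected

# Toy usage with random features, just to show the call shape.
rng = np.random.default_rng(0)
picks = mak_sample(rng.random(1000), rng.normal(size=(1000, 16)),
                   rng.normal(size=(100, 16)), k=50)
```

The farthest-first loop is the classical greedy 2-approximation for the K-center objective; running it on the tailness-filtered, proximity-pruned pool is what makes the selection "model-aware" rather than purely geometric.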