Self-supervised clustering methods have achieved increasing accuracy in recent years but do not yet perform as well as supervised classification methods. This contrasts with the situation for feature learning, where self-supervised features have recently surpassed the performance of supervised features on several important tasks. We hypothesize that the performance gap is due to the difficulty of specifying, without supervision, which features correspond to class differences that are semantic to humans. To reduce the performance gap, we introduce the "single-noun" prior, which states that semantic clusters tend to correspond to concepts that humans label with a single noun. By utilizing a pre-trained network that maps images and sentences into a common space, we impose this prior, obtaining a constrained optimization task. We show that our formulation is a special case of the facility location problem, and introduce a simple-yet-effective approach for solving this optimization task at scale. We test our approach on several commonly reported image clustering datasets and obtain significant accuracy gains over the best existing approaches.
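As a rough illustration of the facility-location connection, the sketch below greedily selects k "center" concepts (e.g., single-noun candidates) that maximize the total similarity of each image to its best-matching chosen concept. This is not the paper's algorithm; it is a generic greedy facility-location routine, assuming a precomputed nonnegative similarity matrix (e.g., image-to-noun cosine similarities from a pre-trained image-text model, shifted to be nonnegative).

```python
# Hypothetical sketch: greedy maximization of the facility-location objective
#   F(S) = sum_i max_{c in S} sim[i][c]
# This objective is monotone submodular, so the greedy algorithm attains a
# (1 - 1/e) approximation of the optimum.

def greedy_facility_location(sim, k):
    """Pick k candidate centers greedily.

    sim[i][c] is the (assumed nonnegative) similarity of item i to
    candidate center c. Returns the list of chosen center indices.
    """
    n_items = len(sim)
    n_cands = len(sim[0])
    chosen = []
    best = [0.0] * n_items  # best similarity covered so far, per item

    for _ in range(k):
        # Marginal gain of adding candidate c to the current set.
        gains = []
        for c in range(n_cands):
            if c in chosen:
                gains.append(float("-inf"))
                continue
            gains.append(sum(max(sim[i][c] - best[i], 0.0)
                             for i in range(n_items)))
        c_star = max(range(n_cands), key=gains.__getitem__)
        chosen.append(c_star)
        for i in range(n_items):
            best[i] = max(best[i], sim[i][c_star])
    return chosen


# Toy usage: 4 images, 3 candidate noun concepts.
sim = [[0.9, 0.1, 0.2],
       [0.8, 0.2, 0.1],
       [0.1, 0.9, 0.3],
       [0.2, 0.8, 0.2]]
centers = greedy_facility_location(sim, k=2)  # -> [0, 1]
```

Each image is then assigned to its most similar chosen center, yielding the clustering. The constrained variants studied in the paper would replace this unconstrained greedy step with their own solver.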