We present federated momentum contrastive clustering (FedMCC), a learning framework that can not only extract discriminative representations over distributed local data but also perform data clustering. In FedMCC, a transformed data pair passes through both the online and target networks, resulting in four representations over which the losses are determined. The resulting high-quality representations generated by FedMCC outperform those of several existing self-supervised learning methods on linear evaluation and semi-supervised learning tasks. FedMCC can easily be adapted to ordinary centralized clustering through what we call momentum contrastive clustering (MCC). We show that MCC achieves state-of-the-art clustering accuracy on certain datasets, such as STL-10 and ImageNet-10. We also present a method to reduce the memory footprint of our clustering schemes.
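The four-representation forward pass described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear maps `W_online`/`W_target`, the cosine-similarity loss, and the momentum coefficient `m` are all assumptions standing in for the actual encoders and contrastive clustering losses; only the overall pattern (two augmented views, each passed through both an online and an EMA-updated target network, yielding four representations) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W, x):
    # Stand-in "network": a linear map followed by L2 normalization.
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

D, K = 32, 8                                   # hypothetical input/embedding dims
W_online = rng.normal(scale=0.1, size=(D, K))  # online network weights
W_target = W_online.copy()                     # target starts as a copy of online

x = rng.normal(size=(4, D))                    # a toy batch
# Two augmented views of the same batch (Gaussian noise as a stand-in transform).
x1 = x + 0.05 * rng.normal(size=x.shape)
x2 = x + 0.05 * rng.normal(size=x.shape)

# Each view goes through both networks -> four representations.
z_o1, z_o2 = forward(W_online, x1), forward(W_online, x2)
z_t1, z_t2 = forward(W_target, x1), forward(W_target, x2)

# Symmetric loss: online rep of one view against target rep of the other
# (a simple negative-cosine stand-in for the paper's contrastive losses).
loss = -np.mean(np.sum(z_o1 * z_t2, axis=1) + np.sum(z_o2 * z_t1, axis=1))

# Momentum (EMA) update of the target network from the online network.
m = 0.99
W_target = m * W_target + (1 - m) * W_online
```

The target network receives no gradients; it trails the online network through the exponential moving average, which is what makes the scheme a "momentum" method.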