Decentralized learning has been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems. Unfortunately, the majority of real-world data are unlabeled and can be highly heterogeneous across sources. In this work, we carefully study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL), specifically contrastive visual representation learning. We study the effectiveness of a range of contrastive learning algorithms under decentralized learning settings, on relatively large-scale datasets including ImageNet-100, MS-COCO, and a new real-world robotic warehouse dataset. Our experiments show that the decentralized SSL (Dec-SSL) approach is robust to the heterogeneity of decentralized datasets and learns useful representations for object classification, detection, and segmentation tasks. This robustness makes it possible to significantly reduce communication and lower the participation ratio of data sources, with only minimal drops in performance. Interestingly, using the same amount of data, the representations learned by Dec-SSL can not only perform on par with those learned by centralized SSL, which requires communication and excessive data storage costs, but also sometimes outperform representations extracted from decentralized SL, which requires extra knowledge of the data labels. Finally, we provide theoretical insights into why data heterogeneity is less of a concern for Dec-SSL objectives, and introduce feature alignment and clustering techniques to develop a new Dec-SSL algorithm that further improves performance in the face of highly non-IID data. Our study presents positive evidence for embracing unlabeled data in decentralized learning, and we hope to provide new insights into whether and why decentralized SSL is effective.