In this paper, we propose a Similarity-based Decentralized Knowledge Distillation (SD-Dist) framework for collaboratively learning heterogeneous deep models on decentralized devices. By introducing a preloaded reference dataset, SD-Dist enables every participant device to identify similar users and distil knowledge from them without assuming a fixed model architecture. Moreover, none of these operations reveals sensitive information such as personal data or model parameters. Extensive experiments on three real-life datasets show that SD-Dist achieves competitive performance with fewer computational resources while accommodating model heterogeneity and preserving privacy. As revealed in our experiments, the framework also improves the robustness of the resulting models when users' data is sparse and diverse.
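To make the underlying idea concrete, the sketch below illustrates one plausible realization of similarity-based distillation over a shared reference set: each device publishes only its soft predictions on the preloaded reference data, measures similarity to peers from those predictions, and distils from a similarity-weighted ensemble of peer predictions. This is a minimal illustration under assumed names (`reference_outputs`, `peer_similarity`, `distillation_loss`, `reference_loader`), not SD-Dist's exact procedure.

```python
# Minimal sketch (assumptions, not the paper's exact algorithm):
# devices exchange only soft predictions on a shared reference dataset,
# never raw data or model parameters.
import torch
import torch.nn.functional as F

def reference_outputs(model, reference_loader, temperature=2.0):
    """Soft predictions of one device's local model on the shared reference set."""
    model.eval()
    outs = []
    with torch.no_grad():
        for x in reference_loader:              # only reference inputs are needed
            outs.append(F.softmax(model(x) / temperature, dim=1))
    return torch.cat(outs)                      # shape: (N_ref, num_classes)

def peer_similarity(own_out, peer_out):
    """Cosine similarity between two devices' flattened reference-set outputs."""
    return F.cosine_similarity(own_out.flatten(), peer_out.flatten(), dim=0)

def distillation_loss(local_logits, peer_soft_labels, similarities, temperature=2.0):
    """KL divergence to a similarity-weighted ensemble of similar peers' predictions."""
    weights = torch.softmax(torch.stack(similarities), dim=0)
    teacher = sum(w * p for w, p in zip(weights, peer_soft_labels))
    student = F.log_softmax(local_logits / temperature, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```

Because only reference-set predictions cross device boundaries, each participant may use an arbitrary local architecture, which is consistent with the heterogeneity and privacy properties claimed above.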