The proliferation of edge networks creates islands of learning agents working on local streams of data. Transferring knowledge between these agents in real time, without exposing private data, enables collaboration that reduces learning time and increases model confidence. Incorporating knowledge derived from data a local model has never seen makes it possible to debias that model or extend its classification abilities to previously unseen data. Selective, decentralized knowledge transfer lets each model retain its local insights, allowing local flavors of a machine learning model to coexist. This approach suits the decentralized architecture of edge networks, where a local edge node serves a community of learning agents that are likely to encounter similar data. We propose a knowledge-distillation-based method for pairwise knowledge transfer pipelines between models trained on non-i.i.d. data and compare it with other popular knowledge transfer methods. We also evaluate different scenarios of knowledge transfer network construction and demonstrate the practicality of our approach. Our experiments show that knowledge transfer using our method outperforms standard methods in a real-time transfer scenario.
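The core mechanism referenced above, knowledge distillation, transfers knowledge by training one model to match another model's temperature-softened output distribution rather than sharing raw data. As a minimal illustrative sketch (not the paper's actual pipeline; the function names and the NumPy formulation are assumptions), the standard distillation loss is the KL divergence between the teacher's and student's softened softmax outputs, scaled by the squared temperature:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, numerically stabilized by
    # subtracting the row-wise maximum before exponentiation.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's soft targets and the
    student's soft predictions, scaled by T^2 (Hinton-style
    distillation). Higher T exposes more of the teacher's
    inter-class similarity structure ("dark knowledge")."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    # Small epsilon guards against log(0) for near-zero probabilities.
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

Because only logits on shared (or public) inputs are exchanged, no private training data leaves the local agent; when the student's logits match the teacher's exactly, the loss is zero, and it grows as the two output distributions diverge.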