For highly distributed environments such as edge computing, collaborative learning approaches eschew dependence on a single global, shared model in favor of models tailored to each location. Creating tailored models for individual learning contexts reduces the amount of data transfer, while collaboration among peers provides acceptable model performance. Collaboration presupposes, however, the availability of knowledge transfer mechanisms, which are not trivial for deep learning models, where knowledge cannot easily be attributed to precise slices of the model. We present Canoe, a framework that facilitates knowledge transfer for neural networks. Canoe provides new system support for dynamically extracting significant parameters from a helper node's neural network and uses them, together with a multi-model boosting-based approach, to improve the predictive performance of the target node. The evaluation of Canoe with different PyTorch and TensorFlow neural network models demonstrates that its knowledge transfer mechanism improves a model's adaptiveness to changes by up to 3.5x compared to learning in isolation, while affording a reduction of several orders of magnitude in data movement costs compared to federated learning.
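To make the two ideas named above concrete, the following is a minimal PyTorch sketch of (1) extracting "significant" parameters from a helper node's network and (2) combining helper and target predictions in a boosting-style multi-model ensemble. The function names (transfer_significant_params, boosted_predict), the magnitude-based significance criterion, and the fixed ensemble weights are illustrative assumptions, not Canoe's actual API or algorithm.

```python
# Hypothetical sketch of helper-to-target knowledge transfer; the
# significance criterion (largest absolute weight values) and all names
# here are assumptions for illustration, not Canoe's implementation.
import torch
import torch.nn as nn

def transfer_significant_params(helper: nn.Module, target: nn.Module,
                                top_k_fraction: float = 0.1) -> None:
    """Copy the largest-magnitude fraction of each helper layer's
    parameters into the corresponding target layer (assumes the two
    models share the same architecture)."""
    with torch.no_grad():
        for h_param, t_param in zip(helper.parameters(),
                                    target.parameters()):
            k = max(1, int(h_param.numel() * top_k_fraction))
            # Indices of the k entries with the largest absolute value.
            _, idx = torch.topk(h_param.abs().view(-1), k)
            # Overwrite only those entries in the target parameter.
            t_param.view(-1)[idx] = h_param.view(-1)[idx]

def boosted_predict(models, weights, x):
    """Weighted combination of per-model predictions, in the spirit of
    a multi-model boosting-based ensemble."""
    logits = sum(w * m(x) for m, w in zip(models, weights))
    return logits.argmax(dim=-1)
```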