Over the past decade, many deep learning models have been well trained and have achieved great success in various fields of machine intelligence, especially computer vision and natural language processing. To better leverage the potential of these well-trained models in intra-domain or cross-domain transfer learning scenarios, knowledge distillation (KD) and domain adaptation (DA) have been proposed and have become research highlights. Both aim to transfer useful information from a well-trained model together with its original training data. However, the original data is often unavailable due to privacy, copyright, or confidentiality concerns. Recently, the data-free knowledge transfer paradigm has attracted increasing attention, as it distills valuable knowledge from well-trained models without requiring access to the training data. In particular, it mainly consists of data-free knowledge distillation (DFKD) and source data-free domain adaptation (SFDA). On the one hand, DFKD aims to transfer the intra-domain knowledge of the original data from a cumbersome teacher network to a compact student network for model compression and efficient inference. On the other hand, the goal of SFDA is to reuse the cross-domain knowledge stored in a well-trained source model and adapt it to a target domain. In this paper, we provide a comprehensive survey of data-free knowledge transfer from the perspectives of knowledge distillation and unsupervised domain adaptation, to help readers better understand the current research status and ideas. Applications and challenges of the two areas are briefly reviewed, respectively. Furthermore, we provide some insights into directions for future research.