We present a framework for transfer learning based on modular variational Gaussian processes (GPs). We develop a module-based method in which, given a dictionary of well-fitted GPs, one can build ensemble GP models without revisiting any data. Each model is characterised by its hyperparameters, pseudo-inputs and their corresponding posterior densities. Our method avoids undesired data centralisation, reduces rising computational costs and allows the transfer of learned uncertainty metrics after training. We exploit the augmentation of high-dimensional integral operators based on the Kullback-Leibler divergence between stochastic processes to introduce an efficient lower bound over all the sparse variational GPs, even when they differ in complexity and likelihood distribution. The method is also valid for multi-output GPs, learning correlations a posteriori between independent modules. Extensive results illustrate the usability of our framework in large-scale and multi-task experiments, also compared with the exact inference methods in the literature.
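To make the modular idea concrete, the following is a minimal sketch, not the paper's variational bound: each module stores only pseudo-inputs, hyperparameters and posterior statistics (here via a simple DTC-style sparse GP), and an ensemble prediction is formed from the stored modules alone, without revisiting any data shard. The names `GPModule` and `ensemble_predict`, and the precision-weighted (product-of-experts style) combination, are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel matrix.
    d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d / lengthscale**2)

class GPModule:
    """One 'module': pseudo-inputs Z, hyperparameters, and the posterior
    statistics needed to predict. The raw shard (X, y) is NOT stored."""
    def __init__(self, X, y, Z, noise=0.1):
        # DTC-style sparse GP fit on this data shard (illustrative choice).
        Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
        Kzx = rbf(Z, X)
        A = Kzz + Kzx @ Kzx.T / noise**2
        self.Z = Z
        self.alpha = np.linalg.solve(A, Kzx @ y) / noise**2  # mean weights
        self.A_inv = np.linalg.inv(A)
        self.Kzz_inv = np.linalg.inv(Kzz)

    def predict(self, Xs):
        # Predictive mean and marginal variance at test inputs Xs.
        Ksz = rbf(Xs, self.Z)
        mean = Ksz @ self.alpha
        var = (rbf(Xs, Xs).diagonal()
               - np.einsum('ij,jk,ik->i', Ksz, self.Kzz_inv - self.A_inv, Ksz))
        return mean, np.maximum(var, 1e-9)

def ensemble_predict(modules, Xs):
    # Precision-weighted combination of module predictions: only the stored
    # modules are used, so no training data is ever revisited.
    means, variances = zip(*(m.predict(Xs) for m in modules))
    prec = sum(1.0 / v for v in variances)
    mean = sum(m / v for m, v in zip(means, variances)) / prec
    return mean, 1.0 / prec
```

Two modules fitted on separate shards of the same underlying function can then be combined at test time purely from their stored summaries, which is the spirit of the dictionary-of-modules construction described above.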