This paper considers the problem of Bayesian transfer learning-based knowledge fusion between linear state-space processes driven by uniform state and observation noise processes. The target task conditions on probabilistic state predictor(s) supplied by the source filtering task(s) to improve its own state estimate. A joint model of the target and source(s) is not required and is not elicited. The resulting decision-making problem for choosing the optimal conditional target filtering distribution under incomplete modelling is solved via fully probabilistic design (FPD), i.e. via appropriate minimization of Kullback-Leibler divergence (KLD). The resulting FPD-optimal target learner is robust, in the sense that it can reject poor-quality source knowledge. In addition, the fact that this Bayesian transfer learning (BTL) scheme does not depend on a model of interaction between the source and target tasks ensures robustness to the misspecification of such a model. The latter is a problem that affects conventional transfer learning methods. The properties of the proposed BTL scheme are demonstrated via extensive simulations, and in comparison with two contemporary alternatives.
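As a rough illustration only (the notation below is assumed here and is not drawn from the paper's body), the FPD step referred to above selects the target filtering distribution by minimizing a Kullback-Leibler divergence to an ideal distribution encoding the transferred source knowledge:
\[
\mathrm{KLD}\big(\mathsf{F}\,\|\,\mathsf{F}^{I}\big) \equiv \int \mathsf{F}(x)\,\ln\frac{\mathsf{F}(x)}{\mathsf{F}^{I}(x)}\,\mathrm{d}x,
\qquad
\mathsf{F}^{o} \in \arg\min_{\mathsf{F}\in\boldsymbol{\mathsf{F}}} \mathrm{KLD}\big(\mathsf{F}\,\|\,\mathsf{F}^{I}\big),
\]
where \(\boldsymbol{\mathsf{F}}\) denotes the set of admissible (incompletely specified) target models consistent with the target's own observations, and \(\mathsf{F}^{I}\) is the ideal distribution constructed from the source-supplied state predictor(s). This is a generic statement of the FPD principle; the paper's specific construction of \(\boldsymbol{\mathsf{F}}\) and \(\mathsf{F}^{I}\) may differ.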