When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks. However, existing methods often produce unreliable, overconfident uncertainty estimates. To address these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in function space. This allows us to steer the meta-learner's probabilistic predictions towards high epistemic uncertainty in regions with insufficient meta-training data and thus obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines.
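The core idea of function-space regularization can be sketched as follows: evaluate both the meta-learned model and a GP hyperprior at a batch of random measurement points from the input domain, and penalize the KL divergence between the two resulting Gaussian marginals. This is a minimal illustrative sketch, not the paper's implementation; the function names (`predict`, `gp_prior`) and the uniform sampling of measurement points are assumptions made here for concreteness.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)          # trace term
        + diff @ cov1_inv @ diff           # mean-difference term
        - k                                # dimensionality
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))  # log-det ratio
    )

def functional_kl_regularizer(predict, gp_prior, low, high, n_points=16, seed=None):
    """Sample random measurement points X from the input domain and return the
    KL between the meta-learner's predictive marginal at X and the GP
    hyperprior's marginal at X (both assumed Gaussian)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n_points, 1))
    mu_q, cov_q = predict(X)    # meta-learned stochastic process, marginal at X
    mu_p, cov_p = gp_prior(X)   # GP hyperprior, marginal at X
    return gaussian_kl(mu_q, cov_q, mu_p, cov_p)
```

Adding this regularizer to the meta-training objective is what pulls the model's epistemic uncertainty up towards the hyperprior's in regions the meta-training tasks do not cover.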