Meta-learning promises to enable more data-efficient inference by harnessing previous experience from related learning tasks. While existing meta-learning methods help us to improve the accuracy of our predictions in the face of data scarcity, they fail to supply reliable uncertainty estimates, often being grossly overconfident in their predictions. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines. Even in a challenging lifelong BO setting, where optimization tasks arrive one at a time and the meta-learner needs to build up informative prior knowledge incrementally, our proposed method demonstrates strong positive transfer.
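To make the function-space regularization idea concrete, below is a minimal sketch of the general mechanism: the meta-learned prior's Gaussian marginals at a finite set of random measurement points are penalized by their KL divergence to a vanilla GP hyper-prior (zero mean, RBF kernel) at the same points, which pushes the meta-learner towards GP-like epistemic uncertainty away from the meta-training data. This is an illustrative sketch, not the paper's implementation; the function names (`rbf_kernel`, `gaussian_kl`, `functional_kl_regularizer`), the interface assumed for the meta-learned prior, and the toy prior in the usage example are all hypothetical.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """Squared-exponential kernel matrix for inputs X of shape (n, d)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)
    return K + jitter * np.eye(len(X))  # jitter for numerical stability

def gaussian_kl(mu_q, cov_q, mu_p, cov_p):
    """KL( N(mu_q, cov_q) || N(mu_p, cov_p) ) for k-dimensional Gaussians."""
    k = len(mu_q)
    diff = mu_p - mu_q
    # Use solves / Cholesky factors instead of explicit inverses.
    trace_term = np.trace(np.linalg.solve(cov_p, cov_q))
    maha_term = diff @ np.linalg.solve(cov_p, diff)
    logdet_p = 2.0 * np.sum(np.log(np.diag(np.linalg.cholesky(cov_p))))
    logdet_q = 2.0 * np.sum(np.log(np.diag(np.linalg.cholesky(cov_q))))
    return 0.5 * (trace_term + maha_term - k + logdet_p - logdet_q)

def functional_kl_regularizer(meta_prior_marginals, X_measure):
    """Function-space regularizer: KL between the meta-learned prior's
    marginal distribution at measurement points X_measure and a vanilla
    GP hyper-prior (zero mean, RBF kernel) at the same points.
    `meta_prior_marginals` is a hypothetical interface returning the
    prior's mean vector and covariance matrix at given inputs."""
    mu_q, cov_q = meta_prior_marginals(X_measure)
    mu_p = np.zeros(len(X_measure))
    cov_p = rbf_kernel(X_measure)
    return gaussian_kl(mu_q, cov_q, mu_p, cov_p)

if __name__ == "__main__":
    # Sample random measurement points from the domain; in meta-training,
    # this penalty would be added to the meta-objective at each step.
    rng = np.random.default_rng(0)
    X_m = rng.uniform(-3.0, 3.0, size=(16, 1))
    # Stand-in "meta-learned prior": shifted mean, shorter lengthscale.
    toy_prior = lambda X: (0.1 * np.ones(len(X)),
                           rbf_kernel(X, lengthscale=0.5))
    print("functional KL:", functional_kl_regularizer(toy_prior, X_m))
```

Because the penalty is evaluated only through the prior's marginals at finitely many measurement points, it applies to any meta-learner whose predictions can be read as a stochastic process, regardless of how that prior is parameterized.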