The overall predictive uncertainty of a trained predictor can be decomposed into separate contributions due to epistemic and aleatoric uncertainty. Under a Bayesian formulation, assuming a well-specified model, the two contributions can be exactly expressed (for the log-loss) or bounded (for more general losses) in terms of information-theoretic quantities (Xu and Raginsky, 2020). This paper studies epistemic uncertainty within an information-theoretic framework in the broader setting of Bayesian meta-learning. A general hierarchical Bayesian model is assumed, in which hyperparameters determine the per-task priors of the model parameters. Exact characterizations (for the log-loss) and bounds (for more general losses) are derived for the epistemic uncertainty, quantified by the minimum excess meta-risk (MEMR), of optimal meta-learning rules. This characterization is leveraged to bring insights into the dependence of the epistemic uncertainty on the number of tasks and on the amount of per-task training data. Experiments are presented that compare the proposed information-theoretic bounds, evaluated via neural mutual information estimators, with the performance of a novel approximate fully Bayesian meta-learning strategy termed Langevin-Stein Bayesian Meta-Learning (LS-BML).
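As context for the decomposition mentioned above, the log-loss case admits a standard information-theoretic characterization (following Xu and Raginsky, 2020). The sketch below uses illustrative notation not fixed by the abstract: $Y$ is the test label, $X$ the test input, $W$ the model parameters, and $Z^n$ the training data.

```latex
% Posterior predictive entropy splits into aleatoric and epistemic parts
% (log-loss, well-specified Bayesian model; notation assumed for illustration):
%   Y : test label, X : test input, W : model parameters, Z^n : training data
H(Y \mid X, Z^n)
  = \underbrace{H(Y \mid X, W)}_{\text{aleatoric}}
  + \underbrace{I(W; Y \mid X, Z^n)}_{\text{epistemic (minimum excess risk)}}
```

The epistemic term vanishes as the training data $Z^n$ pins down $W$; in the meta-learning setting studied here, an analogous conditional mutual information involving the shared hyperparameters quantifies the MEMR.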