The overall predictive uncertainty of a trained predictor can be decomposed into separate contributions due to epistemic and aleatoric uncertainty. Under a Bayesian formulation, and assuming a well-specified model, the two contributions can be exactly expressed (for the log-loss) or bounded (for more general losses) in terms of information-theoretic quantities (Xu and Raginsky, 2020). This paper studies epistemic uncertainty within an information-theoretic framework in the broader setting of Bayesian meta-learning. A general hierarchical Bayesian model is assumed, in which hyperparameters determine the per-task priors on the model parameters. Exact characterizations (for the log-loss) and bounds (for more general losses) are derived for the epistemic uncertainty, quantified by the minimum excess meta-risk (MEMR), of optimal meta-learning rules. These characterizations are leveraged to gain insight into how the epistemic uncertainty depends on the number of tasks and on the amount of per-task training data. Experiments are presented that use the proposed information-theoretic bounds, evaluated via neural mutual information estimators, to compare the performance of conventional learning and meta-learning as the number of meta-learning tasks increases.
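To make the log-loss characterization concrete, the following is a schematic rendering of the quantities involved, under assumed notation (the precise statements are in the paper and in Xu and Raginsky, 2020). For conventional Bayesian learning with model parameter $W$, training data $Z^n$, and test pair $(X, Y)$, the minimum excess risk under the log-loss is the conditional mutual information
\[
\mathrm{MER} \;=\; I\big(W;\, Y \,\big|\, X,\, Z^n\big),
\]
and, in the hierarchical meta-learning setting, with hyperparameter $U$, meta-training data $D_{1:N}$ from $N$ tasks, and per-task training data $Z^m$ for the target task, the MEMR takes the analogous conditional form, schematically
\[
\mathrm{MEMR} \;=\; I\big(U;\, Y \,\big|\, X,\, D_{1:N},\, Z^m\big),
\]
so that the dependence on $N$ and $m$ enters through the conditioning variables.

As a rough illustration of the neural mutual information estimators mentioned above, the sketch below implements a MINE-style estimator based on the Donsker-Varadhan lower bound in PyTorch. The abstract does not specify the estimator used in the experiments, so the architecture, the names (`Critic`, `dv_bound`), the dimensions, and all hyperparameters here are illustrative assumptions rather than the paper's actual setup.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Small MLP critic T(a, b) for the Donsker-Varadhan bound."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

def dv_bound(critic: Critic, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan lower bound on I(A; B) from one minibatch.

    Aligned rows (a_i, b_i) are joint samples; shuffling b within the
    batch approximates samples from the product of the marginals.
    """
    joint = critic(a, b).mean()
    b_perm = b[torch.randperm(b.size(0))]
    marginal = torch.logsumexp(critic(a, b_perm), dim=0) - torch.log(
        torch.tensor(b.size(0), dtype=torch.float))
    return joint - marginal

# Usage sketch on synthetic correlated data: maximize the bound
# over the critic's parameters to tighten the estimate from below.
a = torch.randn(256, 4)                                 # placeholder samples of A
b = a @ torch.randn(4, 4) + 0.1 * torch.randn(256, 4)   # correlated B
critic = Critic(dim_a=4, dim_b=4)
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
for _ in range(200):
    loss = -dv_bound(critic, a, b)                      # ascend the lower bound
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"estimated I(A; B) >= {dv_bound(critic, a, b).item():.3f}")
```

In this role the estimator is plugged in wherever a conditional mutual information term appears in the bounds, with the conditioning variables concatenated into one of the critic's inputs; single-sample bounds of this kind are known to be noisy, which is one reason the experiments compare trends (e.g., in the number of tasks) rather than absolute values.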