Meta-learning automatically infers an inductive bias, which includes the hyperparameters of the base-learning algorithm, by observing data from a finite number of related tasks. This paper studies PAC-Bayes bounds on the meta-generalization gap. The meta-generalization gap comprises two sources of error: the environment-level and task-level gaps, which result from the observation of a finite number of tasks and of a finite number of data samples per task, respectively. In this paper, by upper bounding arbitrary convex functions that link the expected and empirical losses at both the environment level and the per-task level, we obtain new PAC-Bayes bounds. Using these bounds, we develop new PAC-Bayes meta-learning algorithms. Numerical examples demonstrate the merits of the proposed bounds and algorithms in comparison to prior PAC-Bayes bounds for meta-learning.
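As a schematic illustration of the convex-comparator technique the abstract refers to (a sketch under assumed standard notation, not an equation quoted from the paper), the change-of-measure step underlying such PAC-Bayes bounds can be written as follows. Here $Q$ and $P$ denote a posterior and prior over the learned object $u$, $L$ and $\hat{L}$ the population and empirical losses, $\psi$ a jointly convex comparator function, $S$ the observed data, and $n$ the sample size at the relevant level.

```latex
% Schematic sketch (assumed notation): generic convex-comparator PAC-Bayes bound.
% By Jensen's inequality, the Donsker-Varadhan variational formula, and a Markov
% argument, for any fixed prior P the following holds simultaneously for all
% posteriors Q with probability at least 1 - \delta over the draw of S:
\[
  \psi\!\big(\mathbb{E}_{u \sim Q}[L(u)],\; \mathbb{E}_{u \sim Q}[\hat{L}(u)]\big)
  \;\le\;
  \frac{1}{n}\left(
    \mathrm{KL}(Q \,\|\, P) + \ln \frac{\Xi_\psi(n)}{\delta}
  \right),
  \qquad
  \Xi_\psi(n) \;:=\; \mathbb{E}_{u \sim P}\,\mathbb{E}_{S}\!\left[
    e^{\, n\,\psi\left(L(u),\, \hat{L}(u)\right)}
  \right].
\]
% In the meta-learning setting this step is applied at two levels: at the
% environment level (n = number of observed tasks, KL between hyper-posterior
% and hyper-prior) and at the per-task level (n = number of samples in the
% task, KL between base posterior and base prior); the two resulting gap
% terms together bound the meta-generalization gap.
```

Specific choices of $\psi$ recover familiar bound shapes; for instance, a quadratic comparator yields square-root (Hoeffding-type) gap terms, which is one way the abstract's "arbitrary convex functions" unify prior PAC-Bayes meta-learning bounds.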