By leveraging experience from previous tasks, meta-learning algorithms can adapt quickly and effectively when encountering new tasks. However, it remains unclear how well the learned meta-knowledge generalizes to new tasks. Probably approximately correct (PAC) Bayes bound theory provides a theoretical framework for analyzing the generalization performance of meta-learning. We derive three novel generalization error bounds for meta-learning based on the PAC-Bayes relative entropy bound. Furthermore, using the empirical risk minimization (ERM) method, we develop a PAC-Bayes bound for meta-learning with a data-dependent prior. Experiments illustrate that the three proposed PAC-Bayes bounds for meta-learning provide competitive generalization guarantees, and that the extended PAC-Bayes bound with a data-dependent prior achieves rapid convergence.
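For reference, the classical single-task PAC-Bayes relative entropy (kl) bound, which bounds of this kind typically build on, can be sketched as follows; the notation (sample $S$ of size $n$, data-free prior $P$, posterior $Q$, empirical and true risks $\hat{L}_S$ and $L$, confidence $\delta$) is illustrative and not taken from this abstract. With probability at least $1-\delta$ over $S$, simultaneously for all posteriors $Q$,
\[
\mathrm{kl}\!\left(\hat{L}_S(Q)\,\middle\|\,L(Q)\right) \;\le\; \frac{\mathrm{KL}(Q\|P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\qquad
\mathrm{kl}(q\|p) \;=\; q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}.
\]
Meta-learning versions of such bounds additionally account for the number of observed tasks, so that the prior itself can be learned across tasks rather than fixed in advance.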