Meta-learning has proven successful for few-shot learning across the regression, classification, and reinforcement learning paradigms. Recent approaches have adopted Bayesian interpretations to improve gradient-based meta-learners by quantifying the uncertainty of the post-adaptation estimates. Most of these works, however, largely ignore the latent relationship between the covariate distribution $p(x)$ of a task and the corresponding conditional distribution $p(y|x)$. In this paper, we identify the need to explicitly model the meta-distribution over task covariates in a hierarchical Bayesian framework. We begin by introducing a graphical model that leverages samples from the marginal $p(x)$ to better infer the posterior over the optimal parameters of the conditional distribution $p(y|x)$ for each task. Based on this model, we propose a computationally feasible meta-learning algorithm by introducing meaningful relaxations in our final objective. We demonstrate the gains of our algorithm over initialization-based meta-learning baselines on popular classification benchmarks. Finally, to understand the potential benefit of modeling task covariates, we further evaluate our method on a synthetic regression dataset.