Meta-learning for few-shot classification is an emerging problem in machine learning that has recently received enormous attention; the goal is to learn a model that can quickly adapt to a new task with only a few labeled examples. We consider a Bayesian Gaussian process (GP) approach, in which we meta-learn the GP prior, and adaptation to a new task is carried out by the GP predictive model derived from posterior inference. We adopt the Laplace posterior approximation, but to circumvent the iterative gradient steps required to find the MAP solution, we introduce a novel linear discriminant analysis (LDA) plugin as a surrogate for the MAP solution. In essence, the MAP solution is approximated by the LDA estimate, but to take the GP prior into account, we apply a prior-norm adjustment when estimating LDA's shared variance parameters, which ensures that the adjusted estimate is consistent with the GP prior. This yields closed-form, differentiable GP posteriors and predictive distributions, allowing fast meta-training. We demonstrate considerable improvement over previous approaches.
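To make the idea concrete, below is a minimal NumPy sketch of the plugin step under simplifying assumptions: the LDA estimate of each class weight is taken as the per-class mean of the (meta-learned) features, and a hypothetical prior-norm adjustment rescales each estimate so its norm is consistent with a zero-mean Gaussian prior. The function names (`lda_plugin_map`, `predict`) and the specific scaling rule are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def lda_plugin_map(Phi, y, n_classes, prior_var=1.0):
    """Sketch of the LDA plugin surrogate for the MAP weights.

    Phi: (n, d) support-set features; y: (n,) integer labels.
    The scaling rule below (matching the prior's expected squared
    norm, d * prior_var) is an assumed stand-in for the paper's
    prior-norm adjustment of LDA's shared variance parameters.
    """
    d = Phi.shape[1]
    # LDA estimate: per-class mean of the support features.
    W = np.stack([Phi[y == c].mean(axis=0) for c in range(n_classes)])
    # Prior-norm adjustment (assumed form): rescale each class weight
    # so its squared norm matches the prior's expectation.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_adj = W * np.sqrt(d * prior_var) / np.maximum(norms, 1e-12)
    return W_adj

def predict(Phi_query, W):
    """Softmax class probabilities for query features under the plugin weights."""
    logits = Phi_query @ W.T
    expl = np.exp(logits - logits.max(axis=1, keepdims=True))
    return expl / expl.sum(axis=1, keepdims=True)

# Toy usage: a 5-way 1-shot episode with random 16-dim features.
rng = np.random.default_rng(0)
Phi_s = rng.normal(size=(5, 16))   # support features, one per class
y_s = np.arange(5)                 # support labels
W = lda_plugin_map(Phi_s, y_s, n_classes=5)
probs = predict(rng.normal(size=(3, 16)), W)
print(probs.shape)                 # (3, 5)
```

Because every step above is a closed-form array operation rather than an iterative MAP search, the whole episode-level adaptation is differentiable and can sit inside the meta-training loop, which is the property the abstract emphasizes.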