This paper proposes an extension of principal component analysis to Gaussian process (GP) posteriors, which we call GP-PCA. Since GP-PCA estimates a low-dimensional space of GP posteriors, it can be used for meta-learning, a framework for improving the performance on a new task by estimating the structure shared by a set of tasks. The central issue is how to define the structure of a set of GPs with infinite-dimensional parameters, such as a coordinate system and a divergence. In this study, we reduce the infinite-dimensionality of GPs to the finite-dimensional case under the information-geometric framework by considering the space of GP posteriors that share the same prior. In addition, we propose an approximation method for GP-PCA based on variational inference and demonstrate the effectiveness of GP-PCA for meta-learning through experiments.