Neural Processes (NPs; Garnelo et al., 2018a,b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes. We provide a rigorous analysis of the standard maximum-likelihood objective used to train conditional NPs. Moreover, we propose a new member of the Neural Process family called the Gaussian Neural Process (GNP), which models predictive correlations, incorporates translation equivariance, provides universal approximation guarantees, and demonstrates encouraging performance.