The ability of machine learning models to handle uncertainty has become as crucial as, if not more crucial than, their predictive accuracy itself. For instance, during the pandemic, governmental policies and personal decisions were constantly made under uncertainty. To this end, the Neural Process Family (NPF) has recently shed light on prediction with uncertainty by bridging Gaussian processes and neural networks. The latent neural process, a member of the NPF, is believed to be capable of modelling the uncertainty at specific points (local uncertainty) as well as over general function priors (global uncertainty). Nonetheless, some critical questions remain unresolved, such as a formal definition of global uncertainty, the causality behind global uncertainty, and the manipulation of global uncertainty for generative models. To address these, we build a new member, the GloBal Convolutional Neural Process (GBCoNP), which achieves state-of-the-art log-likelihood among latent NPFs. It designs a global uncertainty representation p(z) as an aggregation over a discretized input space. The causal effect between the degree of global uncertainty and the intra-task diversity is discussed. The learnt prior is analyzed across a variety of scenarios, including 1D, 2D, and a newly proposed spatio-temporal COVID dataset. Our manipulation of the global uncertainty not only generates desired samples to tackle few-shot learning, but also enables probability evaluation on the functional priors.
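To make the idea of a global uncertainty representation p(z) concrete, below is a minimal, hypothetical sketch of one way such a latent could be formed: context points are projected onto a discretized input grid (a SetConv-style encoding), processed by a small CNN, and then aggregated over the grid into the parameters of a single global latent distribution. All names, layer sizes, and the RBF length scale are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GlobalLatentEncoder(nn.Module):
    """Hypothetical ConvCNP-style encoder that aggregates a discretized
    functional representation into one global latent q(z | context).
    Sizes and the grid range are illustrative assumptions."""

    def __init__(self, grid_size=64, z_dim=16, length_scale=0.1):
        super().__init__()
        # Fixed 1D grid over [0, 1] used to discretize the input space.
        self.register_buffer("grid", torch.linspace(0.0, 1.0, grid_size))
        self.length_scale = length_scale
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.to_stats = nn.Linear(32, 2 * z_dim)  # mean and pre-softplus std

    def forward(self, x_ctx, y_ctx):
        # x_ctx, y_ctx: (batch, num_ctx, 1)
        # SetConv: project the context set onto the grid with an RBF kernel.
        dists = x_ctx - self.grid.view(1, 1, -1)               # (B, N, G)
        weights = torch.exp(-0.5 * (dists / self.length_scale) ** 2)
        density = weights.sum(dim=1)                            # (B, G)
        signal = (weights * y_ctx).sum(dim=1) / (density + 1e-8)
        h = torch.stack([density, signal], dim=1)               # (B, 2, G)
        h = self.cnn(h)                                         # (B, 32, G)
        # Aggregate over the discretized input space -> global representation.
        h = h.mean(dim=-1)                                      # (B, 32)
        mu, pre_sigma = self.to_stats(h).chunk(2, dim=-1)
        sigma = 0.01 + 0.99 * nn.functional.softplus(pre_sigma)
        return torch.distributions.Normal(mu, sigma)


# Example: sample the global latent for a batch of 1D context sets.
enc = GlobalLatentEncoder()
x = torch.rand(8, 10, 1)
y = torch.sin(6 * x)
q_z = enc(x, y)
z = q_z.rsample()  # one draw of the global uncertainty variable z
```

Manipulating or resampling z under this kind of scheme would correspond to varying the function-level (global) uncertainty while the local, per-point predictive uncertainty is produced by a downstream decoder.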