The Conditional Neural Process (CNP) family of models offers a promising direction for tackling few-shot problems by achieving better scalability and competitive predictive performance. However, current CNP models only capture the overall uncertainty of the prediction made on a target data point. They lack a systematic, fine-grained quantification of the distinct sources of uncertainty that are essential for model training and decision-making in the few-shot setting. We propose Evidential Conditional Neural Processes (ECNP), which replace the standard Gaussian distribution used by CNPs with a much richer hierarchical Bayesian structure through evidential learning to achieve epistemic-aleatoric uncertainty decomposition. The evidential hierarchical structure also leads to theoretically justified robustness to noisy training tasks. Theoretical analysis of the proposed ECNP establishes its relationship with CNP while offering deeper insights into the roles of the evidential parameters. Extensive experiments conducted on both synthetic and real-world data demonstrate the effectiveness of our proposed model in various few-shot settings.
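To illustrate the epistemic-aleatoric decomposition the abstract refers to, here is a minimal sketch using the standard Normal-Inverse-Gamma (NIG) parameterization common in evidential regression. This is an illustrative example of the general technique, not the paper's actual ECNP implementation; the function name and parameter names are hypothetical.

```python
def decompose_uncertainty(gamma, nu, alpha, beta):
    """Decompose predictive uncertainty from NIG evidential parameters.

    In evidential regression, the network outputs the parameters of a
    Normal-Inverse-Gamma distribution (gamma, nu, alpha, beta) rather
    than a single Gaussian. Under the standard NIG moments:
      prediction         E[mu]        = gamma
      aleatoric variance E[sigma^2]   = beta / (alpha - 1)
      epistemic variance Var[mu]      = beta / (nu * (alpha - 1))
    (requires alpha > 1 and nu > 0 for the moments to exist).
    """
    if alpha <= 1 or nu <= 0:
        raise ValueError("NIG moments require alpha > 1 and nu > 0")
    prediction = gamma
    aleatoric = beta / (alpha - 1)          # irreducible data noise
    epistemic = beta / (nu * (alpha - 1))   # model uncertainty; shrinks as
                                            # evidence (nu) grows
    return prediction, aleatoric, epistemic


# Example: more evidence (larger nu) lowers epistemic but not aleatoric
pred, alea, epi = decompose_uncertainty(gamma=0.0, nu=2.0, alpha=3.0, beta=4.0)
```

Note how the two variance terms differ only by the factor `nu`: accumulating evidence reduces epistemic uncertainty toward zero while aleatoric uncertainty remains, which is exactly the separation a single Gaussian output head cannot provide.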