Recent approaches build on implicit neural representations (INRs) to propose generative models over function spaces. However, they are computationally costly for inference tasks such as missing-data imputation, or cannot tackle them at all. In this work, we propose a novel deep generative model, named VAMoH. VAMoH combines the capability of modeling continuous functions using INRs with the inference capabilities of Variational Autoencoders (VAEs). In addition, VAMoH relies on a normalizing flow to define the prior, and on a mixture of hypernetworks to parametrize the data log-likelihood. This gives VAMoH high expressivity and interpretability. Through experiments on a diverse range of data types, such as images, voxels, and climate data, we show that VAMoH can effectively learn rich distributions over continuous functions. Furthermore, it can perform inference-related tasks, such as conditional super-resolution generation and in-painting, as well as or better than previous approaches, while being less computationally demanding.
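As a rough illustration of the mixture-of-hypernetworks idea mentioned above, the following NumPy sketch shows K hypernetworks that each map a latent code to the weights of a small INR, which is then evaluated at continuous coordinates. All names, sizes, and the simple output-averaging mixture are hypothetical illustrations, not details taken from the paper, whose actual mixture formulation over the log-likelihood may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, HIDDEN, K = 8, 16, 3  # hypothetical sizes

# Each hypernetwork linearly maps a latent code z to the weights of a tiny INR
# f(x) = w2 @ tanh(w1 @ x + b1) + b2, mapping a 2-D coordinate to one value.
N_PARAMS = HIDDEN * 2 + HIDDEN + HIDDEN + 1  # w1, b1, w2, b2

hypernets = [rng.normal(scale=0.1, size=(N_PARAMS, LATENT_DIM)) for _ in range(K)]

def inr_forward(params, coords):
    """Evaluate the generated INR at continuous coordinates (n, 2)."""
    w1 = params[: HIDDEN * 2].reshape(HIDDEN, 2)
    b1 = params[HIDDEN * 2 : HIDDEN * 3]
    w2 = params[HIDDEN * 3 : HIDDEN * 4].reshape(1, HIDDEN)
    b2 = params[HIDDEN * 4]
    return (w2 @ np.tanh(w1 @ coords.T + b1[:, None]) + b2).ravel()

z = rng.normal(size=LATENT_DIM)           # latent sample (standard normal here)
weights = np.array([0.5, 0.3, 0.2])       # mixture weights over the K hypernetworks
coords = rng.uniform(-1, 1, size=(5, 2))  # 5 query coordinates in [-1, 1]^2

# Mixture of hypernetworks: combine the K INR outputs by the mixture weights.
outputs = np.stack([inr_forward(H @ z, coords) for H in hypernets])
values = weights @ outputs
print(values.shape)  # (5,)
```

Because the decoder is a function of continuous coordinates, the same latent code can be queried at any resolution, which is what enables tasks like conditional super-resolution and in-painting.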