Generative models are typically trained on grid-like data such as images. As a result, the size of these models usually scales directly with the underlying grid resolution. In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions. We then build generative models by learning distributions over such functions. By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that are agnostic to discretization. To train our model, we use an adversarial approach with a discriminator that acts on continuous signals. Through experiments on a wide variety of data modalities including images, 3D shapes and climate data, we demonstrate that our model can learn rich distributions of functions independently of data type and resolution.
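To illustrate the core idea of parameterizing a data point as a continuous function rather than a grid, the following sketch represents an "image" as a small coordinate MLP mapping (x, y) positions to RGB values, which can then be rendered at any resolution. The architecture, layer sizes, and random weights here are hypothetical placeholders, not the paper's model; a trained version would fit these weights to a particular data point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-hidden-layer MLP: weights stand in for a learned function.
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(32, 3)), np.zeros(3)

def f(coords):
    """Evaluate the continuous function at (x, y) coordinates in [-1, 1]^2."""
    h = np.tanh(coords @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return h @ W3 + b3  # one RGB value per input coordinate

def sample_grid(resolution):
    """Render the same underlying function at an arbitrary grid resolution."""
    xs = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    return f(grid).reshape(resolution, resolution, 3)

low = sample_grid(16)    # coarse rendering of the data point
high = sample_grid(128)  # fine rendering of the *same* data point
```

Because the data point is the function itself, discretization becomes a choice made at evaluation time: the 16x16 and 128x128 arrays above are two views of one underlying object, which is what makes a generative model over such functions resolution-agnostic.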