Generative models are typically trained on grid-like data such as images. As a result, the size of these models usually scales directly with the underlying grid resolution. In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions. We then build generative models by learning distributions over such functions. By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that scale independently of signal resolution. To train our model, we use an adversarial approach with a discriminator that acts on continuous signals. Through experiments on both images and 3D shapes, we demonstrate that our model can learn rich distributions of functions independently of data type and resolution.
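To make the function-based parameterization concrete, the following is a minimal sketch of representing a single image as a continuous function from pixel coordinates to RGB values, which can then be queried at arbitrary resolution. This is an illustration only, assuming a plain coordinate-to-RGB MLP in PyTorch; the class name `CoordinateMLP` and all hyperparameters are hypothetical and are not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Illustrative sketch: a single data point (e.g. an image) represented as a
    continuous function mapping (x, y) coordinates to RGB values."""
    def __init__(self, in_dim=2, hidden_dim=128, out_dim=3, num_layers=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(num_layers - 1):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        layers.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        # coords: (num_points, 2) in [-1, 1]; returns (num_points, 3) RGB values
        return self.net(coords)

# The model size is fixed; only the set of evaluated coordinates changes,
# so the same function can be rendered at any resolution.
side = 64
xs = torch.linspace(-1.0, 1.0, side)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
image = CoordinateMLP()(grid).reshape(side, side, 3)
```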