This paper proposes a multi-grid method for learning energy-based generative ConvNet models of images. For each grid, we learn an energy-based probabilistic model where the energy function is defined by a bottom-up convolutional neural network (ConvNet or CNN). Learning such a model requires generating synthesized examples from the model. Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image. The synthesized image at each subsequent grid is obtained by a finite-step MCMC initialized from the synthesized image generated at the previous coarser grid. After obtaining the synthesized examples, the parameters of the models at multiple grids are updated separately and simultaneously based on the differences between synthesized and observed examples. We show that this multi-grid method can learn realistic energy-based generative ConvNet models, and it outperforms the original contrastive divergence (CD) and persistent CD.
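The multi-grid sampling procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a simple quadratic energy stands in for the learned ConvNet energy at each grid, and the grid sizes, Langevin step size, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor):
    # Coarsen the image by average-pooling with the given factor.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    # Nearest-neighbor upsampling to initialize the next finer grid.
    return np.kron(img, np.ones((factor, factor)))

def langevin(img, grad_energy, n_steps=30, step=0.01):
    # Finite-step Langevin MCMC: x <- x - (step^2 / 2) * dE/dx + step * noise.
    for _ in range(n_steps):
        img = (img - 0.5 * step**2 * grad_energy(img)
               + step * rng.standard_normal(img.shape))
    return img

# Placeholder energy E(x) = 0.5 * ||x||^2, standing in for the learned
# ConvNet energy at each grid; its gradient is simply x.
grad_energy = lambda x: x

obs = rng.standard_normal((16, 16))  # one observed training image
grids = [1, 4, 16]                   # illustrative grid sizes (coarse to fine)

# Initialize the chain from the minimal 1 x 1 version of the training image.
synth = langevin(downsample(obs, obs.shape[0] // grids[0]), grad_energy)
for coarse, fine in zip(grids, grids[1:]):
    # The chain at each finer grid starts from the coarser grid's synthesis.
    synth = langevin(upsample(synth, fine // coarse), grad_energy)

print(synth.shape)  # synthesized image at the finest grid
```

In the learning algorithm itself, the synthesized image at each grid would then be compared against the observed image at that grid to update that grid's model parameters separately and simultaneously.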