This work presents a generative modeling approach based on successive subspace learning (SSL). Unlike most generative models in the literature, our method does not rely on neural networks to analyze the underlying source distribution and synthesize images. The resulting method, called the progressive attribute-guided extendable robust image generative (PAGER) model, offers advantages in mathematical transparency, progressive content generation, lower training time, robust performance with fewer training samples, and extendibility to conditional image generation. PAGER consists of three modules: a core generator, a resolution enhancer, and a quality booster. The core generator learns the distribution of low-resolution images and performs unconditional image generation. The resolution enhancer increases image resolution via conditional generation. Finally, the quality booster adds finer details to the generated images. Extensive experiments on the MNIST, Fashion-MNIST, and CelebA datasets demonstrate the generative performance of PAGER.