Artistic painting has achieved significant progress in recent years through the application of hundreds of GAN variants. However, adversarial training is notoriously unstable and can lead to mode collapse. Recently, diffusion models have achieved GAN-level sample quality without adversarial training. By using autoencoders to project the original images into compressed latent spaces and a cross-attention-enhanced U-Net as the diffusion backbone, latent diffusion models achieve stable, high-fidelity image generation. In this paper, we focus on enhancing the creative painting ability of current latent diffusion models in two directions: textual condition extension and model retraining with the Wikiart dataset. Through textual condition extension, users' input prompts are expanded along the temporal and spatial directions for a deeper understanding and interpretation of the prompts. The Wikiart dataset contains 80K famous artworks, drawn over the past 400 years by more than 1,000 famous artists in rich styles and genres. Through retraining, we are able to ask these artists to draw novel and creative paintings on modern topics.