Text-to-Image synthesis is the task of generating an image that matches a given text description. Generative Adversarial Networks (GANs) have been considered the standard method for image synthesis virtually since their introduction; today, Denoising Diffusion Probabilistic Models (DDPMs) are setting a new baseline, with remarkable results in Text-to-Image synthesis, among other fields. Beyond its intrinsic usefulness, such synthesis can also serve as a data augmentation tool when training models for other document image processing tasks. In this work, we present a latent diffusion-based method for styled text-to-text-content-image generation at the word level. Our proposed method generates realistic word image samples in different writer styles, conditioning on class-index styles and text content prompts, without the need for adversarial training, writer recognition, or text recognition. We gauge system performance with the Fréchet Inception Distance, writer recognition accuracy, and writer retrieval. We show that the proposed model produces samples that are aesthetically pleasing, help boost text recognition performance, and achieve a writer retrieval score similar to that of real data.
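For readers unfamiliar with the DDPM framework referenced above, the forward (noising) process admits a closed form: given a clean sample x_0, the noised sample at step t is x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε with ε ~ N(0, I), where ᾱ_t is the cumulative product of (1 − β_s) over a noise schedule β. The minimal sketch below illustrates this with a standard linear β schedule; the schedule parameters are common DDPM defaults, not necessarily those used by the proposed model.

```python
import math
import random

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products alpha_bar_t = prod_{s<=t} (1 - beta_s)
    for a linear beta schedule (common DDPM defaults, assumed here)."""
    alpha_bars, prod = [], 1.0
    for t in range(T):
        beta = beta_start + (beta_end - beta_start) * t / (T - 1)
        prod *= 1.0 - beta
        alpha_bars.append(prod)
    return alpha_bars

def noise_sample(x0, t, alpha_bars, rng=random.Random(0)):
    """Closed-form forward process:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, 1)."""
    abar = alpha_bars[t]
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for x in x0]

alpha_bars = make_alpha_bars()
x0 = [0.5, -0.2, 0.9]          # toy "latent" vector standing in for an image
x_late = noise_sample(x0, 999, alpha_bars)  # near-pure Gaussian noise
```

At small t the output stays close to x_0, while at t near T the signal coefficient sqrt(ᾱ_t) is almost zero and the sample is essentially Gaussian noise; the generative model is trained to reverse this process, and in the latent-diffusion setting it does so in a learned latent space rather than in pixel space.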