In this paper, we propose a graph-based image-to-image translation framework for image generation. We use rich data collected from the popular creativity platform Artbreeder (http://artbreeder.com), where users interpolate multiple GAN-generated images to create artworks. This unique way of creating new images yields a tree-like structure in which the creation history of any given image can be traced. Inspired by this structure, we propose a novel graph-to-image translation model called Graph2Pix, which takes a graph and its corresponding images as input and generates a single image as output. Our experiments show that Graph2Pix outperforms several image-to-image translation frameworks on benchmark metrics, including LPIPS (with a 25% improvement), as well as in a human perception study (n=60) in which users preferred the images generated by our method 81.5% of the time. Our source code and dataset are publicly available at https://github.com/catlab-team/graph2pix.
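To make the input/output contract concrete, the sketch below shows one way the Artbreeder lineage data and a Graph2Pix-style forward pass might be organized. This is a minimal illustration under our own assumptions, not the authors' implementation: `LineageNode`, `collect_ancestors`, and `Graph2PixLike` are hypothetical names, and the toy model simply fuses ancestor images along the channel dimension rather than reproducing the actual architecture (see the repository linked above for the real code).

```python
# A minimal sketch (not the authors' actual API) of how an Artbreeder-style
# lineage graph might be represented and fed to a Graph2Pix-like model.
# All names here (LineageNode, Graph2PixLike, collect_ancestors) are
# hypothetical illustrations, not taken from the paper or its repository.

from dataclasses import dataclass, field
from typing import List

import torch
import torch.nn as nn


@dataclass
class LineageNode:
    """One image in the creation history; parents are the images it was interpolated from."""
    image: torch.Tensor                            # (3, H, W), e.g. a GAN-generated image
    parents: List["LineageNode"] = field(default_factory=list)


def collect_ancestors(node: LineageNode, max_depth: int = 2) -> List[torch.Tensor]:
    """Breadth-first walk up the lineage tree, gathering ancestor images."""
    frontier, images = [node], []
    for _ in range(max_depth):
        next_frontier = []
        for n in frontier:
            for p in n.parents:
                images.append(p.image)
                next_frontier.append(p)
        frontier = next_frontier
    return images


class Graph2PixLike(nn.Module):
    """Toy stand-in: fuses the target image with its ancestors via one conv layer.
    The real Graph2Pix generator is far more involved; this only shows the interface
    of mapping (graph of images) -> (single output image)."""

    def __init__(self, num_inputs: int):
        super().__init__()
        self.fuse = nn.Conv2d(3 * num_inputs, 3, kernel_size=3, padding=1)

    def forward(self, target: torch.Tensor, ancestors: List[torch.Tensor]) -> torch.Tensor:
        x = torch.cat([target, *ancestors], dim=1)  # stack all images along channels
        return torch.tanh(self.fuse(x))             # single output image in [-1, 1]


# Usage with random stand-in images (batch of 1, 256x256):
parent_a = LineageNode(torch.randn(3, 256, 256))
parent_b = LineageNode(torch.randn(3, 256, 256))
child = LineageNode(torch.randn(3, 256, 256), parents=[parent_a, parent_b])

ancestors = [img.unsqueeze(0) for img in collect_ancestors(child, max_depth=1)]
model = Graph2PixLike(num_inputs=1 + len(ancestors))
out = model(child.image.unsqueeze(0), ancestors)    # shape: (1, 3, 256, 256)
```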