The outpainting results produced by existing approaches are often too random to meet users' requirements. In this work, we take image outpainting one step further by allowing users to obtain customized outpainting results using sketches as guidance. To this end, we propose an encoder-decoder based network to conduct sketch-guided outpainting, where two alignment modules are adopted to constrain the generated content to be realistic and consistent with the provided sketches. First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view. Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones using a sketch alignment module. In this way, the learned generator is encouraged to pay more attention to fine details and to be sensitive to the guiding sketches. To our knowledge, this work is the first attempt to explore the challenging yet meaningful task of conditional scenery image outpainting. We conduct extensive experiments on two collected benchmarks to qualitatively and quantitatively validate the effectiveness of our approach against other state-of-the-art generative models.
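The two alignment objectives can be illustrated with a minimal toy sketch. This is NOT the paper's implementation: the sketch extractor below is a simple image-gradient stand-in for the learned sketch-reproduction branch, and the holistic term is reduced to a global-statistics distance, whereas the paper's module is adversarial. All function names here are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for the sketch-reproduction branch: absolute
# image gradients serve as a crude "sketch" map (assumption, for
# illustration only).
def extract_sketch(img):
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    return gx + gy

def sketch_alignment_loss(generated, guide_sketch):
    # L1 distance between sketches re-derived from the generated region
    # and the user-provided guiding sketches.
    return np.abs(extract_sketch(generated) - guide_sketch).mean()

def holistic_alignment_loss(generated, real):
    # Placeholder global-consistency term: distance between mean
    # statistics of the generated and real regions (the paper's module
    # operates at the feature/adversarial level instead).
    return abs(generated.mean() - real.mean())

rng = np.random.default_rng(0)
real = rng.random((8, 8))       # ground-truth region outside the input
gen = rng.random((8, 8))        # synthesized (outpainted) region
guide = extract_sketch(real)    # guiding sketch for that region

total = sketch_alignment_loss(gen, guide) + holistic_alignment_loss(gen, real)
print(float(total))
```

In training, both terms would be combined with a reconstruction loss and minimized jointly, so that the generator is penalized both for globally implausible content and for deviating from the guiding sketches.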