Cinemagraphs are short looping videos created by adding subtle motion to a static image. This kind of media is popular and engaging. However, the automatic generation of cinemagraphs remains underexplored, and current solutions require tedious low-level manual authoring by artists. In this paper, we present an automatic method for generating human cinemagraphs from single RGB images. We investigate the problem in the context of dressed humans in the wind. At the core of our method is a novel cyclic neural network that produces looping cinemagraphs of a target loop duration. To circumvent the problem of collecting real data, we demonstrate that, by working in the image normal space, it is possible to learn garment motion dynamics on synthetic data and generalize to real data. We evaluate our method on both synthetic and real data and demonstrate that it can create compelling and plausible cinemagraphs from single RGB images.