Current Dynamic Texture Synthesis (DyTS) models can synthesize realistic videos. However, they require a slow iterative optimization process to synthesize a single fixed-size short video, and they do not offer any post-training control over the synthesis process. We propose Dynamic Neural Cellular Automata (DyNCA), a framework for real-time and controllable dynamic texture synthesis. Our method is built upon the recently introduced Neural Cellular Automata (NCA) models and can synthesize infinitely long, arbitrary-sized, realistic video textures in real time. We quantitatively and qualitatively evaluate our model and show that our synthesized videos appear more realistic than those of existing methods. We improve the SOTA DyTS synthesis speed by $2\sim 4$ orders of magnitude. Moreover, our model offers several real-time video controls, including motion speed, motion direction, and an editing brush tool. We exhibit our trained models in an online interactive demo that runs on local hardware and is accessible on personal computers and smartphones.
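For readers unfamiliar with NCA, the following is a minimal sketch of the generic per-cell update rule that DyNCA builds upon: each cell perceives its neighborhood through fixed convolution filters and a small shared network produces a residual state update. This is not the paper's exact architecture; the channel count, hidden width, filter bank, and fire rate below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCAStep(nn.Module):
    """One step of a generic Neural Cellular Automaton update.

    Cells perceive their 3x3 neighborhood via fixed depthwise filters
    (identity, Sobel-x, Sobel-y), then a per-cell MLP (1x1 convs)
    outputs a residual update. All hyperparameters here are
    illustrative, not DyNCA's actual values.
    """

    def __init__(self, channels: int = 16, hidden: int = 128, fire_rate: float = 0.5):
        super().__init__()
        self.channels = channels
        self.fire_rate = fire_rate
        ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        sobel_y = sobel_x.t()
        # Repeat the 3-filter bank per channel for a depthwise convolution.
        kernels = torch.stack([ident, sobel_x, sobel_y])          # (3, 3, 3)
        kernels = kernels.repeat(channels, 1, 1)[:, None]         # (3C, 1, 3, 3)
        self.register_buffer("kernels", kernels)
        self.mlp = nn.Sequential(
            nn.Conv2d(3 * channels, hidden, 1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )
        nn.init.zeros_(self.mlp[-1].weight)  # start from the do-nothing update

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Circular padding makes the synthesized texture seamlessly tileable.
        y = F.pad(state, (1, 1, 1, 1), mode="circular")
        perception = F.conv2d(y, self.kernels, groups=self.channels)
        update = self.mlp(perception)
        # Stochastic fire mask: each cell updates independently per step.
        mask = (torch.rand_like(state[:, :1]) < self.fire_rate).float()
        return state + update * mask

# Iterating the rule evolves a random cell grid; RGB frames are read
# from the first 3 state channels. DyNCA trains such a rule so that the
# rollout forms a video texture (training details differ from this sketch).
nca = NCAStep()
state = torch.rand(1, 16, 64, 64)
for _ in range(32):
    state = nca(state)
rgb = state[:, :3].clamp(0, 1)
```

Because the update is local, convolutional, and applied uniformly, a trained rule runs on grids of any size for any number of steps, which is what enables the arbitrary-sized, infinitely long, real-time synthesis claimed above.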