Current Dynamic Texture Synthesis (DyTS) models can synthesize realistic videos. However, these methods require a slow iterative optimization process to synthesize a single fixed-size short video, and they offer no post-training control over the synthesis process. We propose Dynamic Neural Cellular Automata (DyNCA), a framework for real-time and controllable dynamic texture synthesis. Our method is built upon the recently introduced NCA models and can synthesize infinitely long, arbitrarily sized realistic texture videos in real time. We quantitatively and qualitatively evaluate our model and show that our synthesized videos appear more realistic than existing results. We improve the SOTA DyTS performance by $2\sim 4$ orders of magnitude. Moreover, our model offers several real-time, interactive video controls, including motion speed, motion direction, and an editing brush tool.
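For readers unfamiliar with the NCA models the framework builds on, the following is a minimal sketch of a generic NCA update step in the standard formulation (fixed perception filters plus a small per-cell network with stochastic residual updates). All identifiers, channel counts, and hyperparameters here are illustrative assumptions, not DyNCA's actual architecture.

```python
# Minimal sketch of one generic NCA update step (assumed standard
# formulation); names and sizes are illustrative, not DyNCA's API.
import torch
import torch.nn.functional as F

C = 12  # channels per cell (assumed)

# Fixed perception filters: identity + Sobel gradients, applied per channel.
SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
IDENTITY = torch.zeros(3, 3)
IDENTITY[1, 1] = 1.0
FILTERS = torch.stack([IDENTITY, SOBEL_X, SOBEL_X.t()])  # (3, 3, 3)
KERNEL = FILTERS.repeat(C, 1, 1).unsqueeze(1)            # (3C, 1, 3, 3)

# Small per-cell MLP (1x1 convs) mapping perception to a state delta.
mlp = torch.nn.Sequential(
    torch.nn.Conv2d(3 * C, 96, 1), torch.nn.ReLU(),
    torch.nn.Conv2d(96, C, 1),
)

def nca_step(state: torch.Tensor) -> torch.Tensor:
    """One stochastic NCA update on a (B, C, H, W) cell-state grid."""
    # Circular padding gives a seamless, arbitrarily tileable grid.
    perception = F.conv2d(
        F.pad(state, (1, 1, 1, 1), mode="circular"), KERNEL, groups=C
    )
    delta = mlp(perception)
    # Each cell fires independently with probability 0.5.
    mask = (torch.rand(state.shape[0], 1, *state.shape[2:]) < 0.5).float()
    return state + delta * mask

state = torch.zeros(1, C, 64, 64)  # grid size is arbitrary, per the abstract
for _ in range(32):                # iterating the rule evolves the texture
    state = nca_step(state)
```

Because synthesis is just repeated application of this cheap local rule, the video can run indefinitely on a grid of any size, which is what enables the real-time, arbitrary-length, arbitrary-resolution synthesis claimed above.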