We present a general framework for the garment animation problem through unsupervised deep learning, inspired by physically based simulation. Existing trends in the literature already explore this possibility; nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics without supervision and, hence, a general formulation for neural cloth simulation. The key to achieving this is adapting an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, by analyzing the nature of the problem, we devise an architecture that automatically disentangles the static and dynamic cloth subspaces by design, and we show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show that it also allows controlling the level of motion in the predictions, a useful tool for artists not previously available. We provide a detailed analysis of the problem to establish the foundations of neural cloth simulation and to guide future research into the specifics of this domain.
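The "optimization scheme for motion" adapted from simulation can be illustrated with an incremental-potential formulation: the network's predicted vertex positions are trained by minimizing a physics-based energy (inertia, gravity, elasticity) rather than a supervised reconstruction error. The following is a minimal sketch under that assumption; the mass-spring energy, the backward-Euler inertial term, and all function and parameter names are illustrative choices, not the paper's actual formulation or API.

```python
import numpy as np

def inertia_term(x, x_prev, x_prev2, mass, dt):
    # Backward-Euler inertial potential: penalizes deviation of x from the
    # inertia-extrapolated position x_hat = 2*x_prev - x_prev2.
    x_hat = 2.0 * x_prev - x_prev2
    d = x - x_hat
    return 0.5 * mass * np.sum(d * d) / (dt * dt)

def gravity_term(x, mass, g=9.81):
    # Gravitational potential energy (axis 1 taken as the vertical here).
    return mass * g * np.sum(x[:, 1])

def spring_term(x, edges, rest_len, k):
    # Mass-spring elastic energy over the garment mesh edges.
    d = x[edges[:, 0]] - x[edges[:, 1]]
    lengths = np.linalg.norm(d, axis=1)
    return 0.5 * k * np.sum((lengths - rest_len) ** 2)

def physics_loss(x, x_prev, x_prev2, edges, rest_len,
                 mass=1.0, k=10.0, dt=1.0 / 30.0):
    # Total incremental potential for one time step. Minimizing this with
    # respect to the predicted positions x (by backpropagation, with the two
    # previous predictions held fixed) trains cloth dynamics unsupervisedly.
    return (inertia_term(x, x_prev, x_prev2, mass, dt)
            + gravity_term(x, mass)
            + spring_term(x, edges, rest_len, k))
```

In practice `x` would be the output of the garment network for the current frame and the loss would be written in an autodiff framework; the NumPy version above only shows the structure of the objective.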