This paper presents a novel learning-based clothing deformation method that generates rich, plausible detailed deformations for garments worn by bodies of various shapes in various animations. In contrast to existing learning-based methods, which require many trained models for different garment topologies or poses and cannot easily produce rich details, we use a unified framework to generate high-fidelity deformations efficiently and easily. To address the challenging problem of predicting deformations influenced by multi-source attributes, we propose three strategies from novel perspectives. Specifically, we first observe that the fit between the garment and the body strongly affects the degree of folding. We then design an attribute parser that generates detail-aware encodings and infuses them into the graph neural network, thereby enhancing the discrimination of details under diverse attributes. Furthermore, to achieve better convergence and avoid overly smooth deformations, we propose output reconstruction to reduce the complexity of the learning task. Experimental results show that our deformation method outperforms existing methods in both generalization ability and quality of details.
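The abstract does not specify how the detail-aware encodings are infused into the graph neural network. Below is a minimal NumPy sketch of one plausible reading: a small MLP ("attribute parser") maps body-shape and fit parameters to an encoding, which is broadcast to every garment vertex and concatenated with aggregated neighbor features in a message-passing step. All names, dimensions, and the MLP / mean-aggregation choices are our assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical "attribute parser": a tiny two-layer MLP mapping
# body-shape parameters and a garment-fit measure to a
# detail-aware encoding. Sizes are illustrative.
def attribute_parser(body_shape, fit, W1, W2):
    attrs = np.concatenate([body_shape, fit])
    return relu(W2 @ relu(W1 @ attrs))

# One mean-aggregation message-passing step over the garment mesh,
# with the attribute encoding broadcast to every vertex and
# concatenated to the aggregated features before the linear map.
def gnn_layer(X, A, enc, W):
    n = X.shape[0]
    deg = A.sum(axis=1, keepdims=True) + 1.0   # +1 for the self loop
    agg = (A @ X + X) / deg                    # mean over neighbors + self
    cond = np.concatenate([agg, np.tile(enc, (n, 1))], axis=1)
    return relu(cond @ W)

# Toy garment graph: 5 vertices in a ring, 6 input features each.
n_verts, f_in, f_enc, f_out = 5, 6, 4, 8
X = rng.standard_normal((n_verts, f_in))
A = np.zeros((n_verts, n_verts))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    A[i, j] = A[j, i] = 1.0

body_shape = rng.standard_normal(10)   # e.g. SMPL-like shape coefficients
fit = rng.standard_normal(3)           # e.g. a garment-body fit measure

W1 = rng.standard_normal((16, 13)) * 0.1
W2 = rng.standard_normal((f_enc, 16)) * 0.1
W = rng.standard_normal((f_in + f_enc, f_out)) * 0.1

enc = attribute_parser(body_shape, fit, W1, W2)
H = gnn_layer(X, A, enc, W)
print(H.shape)  # (5, 8): per-vertex features now conditioned on attributes
```

Because the same encoding is injected at every vertex, the layer can modulate local fold detail globally by body shape and fit, which matches the abstract's stated goal of discriminating details under diverse attributes.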