We present a real-time cloth animation method for dressing virtual humans of various shapes and poses. Our approach formulates clothing deformation as a high-dimensional function of body shape and pose parameters. To accelerate computation, our formulation factorizes the clothing deformation into two independent components: the deformation introduced by body pose variation (Clothing Pose Model) and the deformation from body shape variation (Clothing Shape Model). Furthermore, we sample and cluster poses spanning the entire pose space and use those clusters to efficiently compute the anchoring points. We also introduce a sensitivity-based distance measure both to find nearby anchoring points and to evaluate their contributions to the final animation. Given a query shape and pose of the virtual agent, we synthesize the resulting clothing deformation by blending the Taylor expansion results of nearby anchoring points. Compared to previous methods, our approach is more general and can add the shape dimension to any clothing pose model. Furthermore, we can animate clothing represented with tens of thousands of vertices at 50+ FPS on a CPU. Moreover, our example database is more representative and can be generated in parallel, thereby reducing training time. We also conduct a user evaluation showing that, compared to a conventional linear blend skinning method, our method improves users' perception of dressed virtual agents in an immersive virtual environment.
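The synthesis step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: it assumes each anchoring point stores its parameter vector, a precomputed clothing deformation, and a Jacobian (the first-order Taylor expansion), and that the sensitivity-based distance is modeled as a per-parameter weighted Euclidean distance. The function name `blend_anchors` and all array shapes are hypothetical.

```python
import numpy as np

def blend_anchors(query, anchors, sensitivity, k=4):
    """Blend first-order Taylor expansions of the k nearest anchoring points.

    query:       (p,) query shape/pose parameter vector
    anchors:     list of (theta_i, D_i, J_i), where theta_i is the anchor's
                 parameter vector, D_i its precomputed clothing deformation,
                 and J_i its Jacobian (dD/dtheta) at theta_i
    sensitivity: (p,) per-parameter weights standing in for the paper's
                 sensitivity-based distance measure (hypothetical form)
    """
    # Sensitivity-weighted distance from the query to every anchor.
    dists = np.array([np.sqrt(np.sum(sensitivity * (query - t) ** 2))
                      for t, _, _ in anchors])
    nearest = np.argsort(dists)[:k]

    # Inverse-distance blending weights over the nearest anchors.
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()

    # Evaluate each anchor's first-order Taylor expansion at the query
    # and blend the results.
    result = np.zeros_like(anchors[0][1])
    for weight, i in zip(w, nearest):
        theta_i, D_i, J_i = anchors[i]
        result += weight * (D_i + J_i @ (query - theta_i))
    return result
```

Because the anchors' expansions are evaluated at the query point before blending, any deformation that is locally well approximated to first order is reproduced accurately, which is what makes the precomputed anchor database usable at interactive rates.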