Despite recent progress in developing animatable full-body avatars, realistic modeling of clothing, one of the core aspects of human self-expression, remains an open challenge. State-of-the-art physical simulation methods can generate realistically behaving clothing geometry at interactive rates. Modeling photorealistic appearance, however, usually requires physically based rendering, which is too expensive for interactive applications. On the other hand, data-driven deep appearance models are capable of efficiently producing realistic appearance, but struggle to synthesize the geometry of highly dynamic clothing and to handle challenging body-clothing configurations. To this end, we introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data. The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry. Our key contribution is a physically inspired appearance network capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects, even for unseen body-clothing configurations. We conduct a thorough evaluation of our model and demonstrate diverse animation results on several subjects and different types of clothing. Unlike previous work on photorealistic full-body avatars, our approach can produce much richer dynamics and more realistic deformations, even for loose clothing. We also demonstrate that our formulation naturally allows clothing to be used with avatars of different people while remaining fully animatable, thus enabling, for the first time, photorealistic avatars with novel clothing.