Virtual telepresence is the future of online communication. Clothing is an essential part of a person's identity and self-expression. Yet ground-truth data of registered clothing is currently unavailable at the resolution and accuracy required to train telepresence models for realistic cloth animation. Here, we propose an end-to-end pipeline for building drivable representations of clothing. The core of our approach is a multi-view patterned-cloth tracking algorithm capable of capturing deformations with high accuracy. We then rely on the high-quality data produced by our tracking method to build a Garment Avatar: an expressive and fully drivable geometry model for a piece of clothing. The resulting model can be animated from a sparse set of views and produces highly realistic reconstructions that are faithful to the driving signals. We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application, where a garment is reconstructed from two views and a user can pick and swap garment designs at will. In addition, we show that in a challenging scenario, when driven exclusively by body pose, our drivable garment avatar produces realistic cloth geometry of significantly higher quality than the state of the art.
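To make the two-stage structure of the pipeline concrete, here is a minimal sketch of how it might be organized: a tracker that registers a template garment mesh against multi-view captures, and a drivable avatar trained on those registrations that decodes sparse driving signals (a few views, or body pose alone) into garment geometry. All class names, methods, and shapes below are illustrative assumptions, not the authors' actual API; the internals are trivial stand-ins for the learned components described in the abstract.

```python
import numpy as np

class PatternedClothTracker:
    """Registers a template garment mesh against multi-view images,
    standing in for the paper's patterned-cloth tracking algorithm."""

    def __init__(self, template_vertices: np.ndarray, template_faces: np.ndarray):
        self.template_vertices = template_vertices  # (V, 3) rest-pose vertices
        self.template_faces = template_faces        # (F, 3) triangle indices

    def track_frame(self, images: list) -> np.ndarray:
        # A real tracker would match the printed pattern across views,
        # triangulate correspondences, and deform the template to fit.
        # This placeholder simply returns the template as a registration.
        return self.template_vertices.copy()

class GarmentAvatar:
    """Drivable geometry model: maps a sparse driving signal
    (a few camera views, or body pose alone) to garment geometry."""

    def __init__(self, tracked_meshes: list):
        # The paper trains a model on the tracked registrations; this
        # sketch just stores the mean shape as a trivial 'model'.
        self.mean_shape = np.mean(np.stack(tracked_meshes), axis=0)

    def drive(self, driving_signal: np.ndarray) -> np.ndarray:
        # A trained decoder would map the signal to deformations;
        # the stand-in ignores it and returns the mean shape.
        return self.mean_shape

# Usage: track a capture, build the avatar, then animate it from pose.
template_v = np.zeros((1000, 3))
template_f = np.zeros((1998, 3), dtype=np.int64)
tracker = PatternedClothTracker(template_v, template_f)
frames = [tracker.track_frame([]) for _ in range(10)]
avatar = GarmentAvatar(frames)
geometry = avatar.drive(np.zeros(72))  # e.g. a 72-dim body-pose vector
```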