We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging. In contrast to previous attempts to learn pose transformations on fixed or topology-equivalent skeleton templates, our method targets a novel scenario: skeleton-free characters with diverse shapes, topologies, and mesh connectivities. The key idea of our method is to represent the characters in a unified articulation model so that the pose can be transferred through corresponding parts. To achieve this, we propose a novel pose transfer network that jointly predicts the character's skinning weights and deformation transformations to articulate the target character into the desired pose. Our method is trained in a semi-supervised manner, leveraging all existing character data with paired or unpaired poses and stylized shapes. It generalizes well to unseen stylized characters and inanimate objects. We conduct extensive experiments and demonstrate the effectiveness of our method on this novel task.
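To make the deformation step concrete, the following is a minimal sketch (not the authors' code) of how a target character could be articulated once the network has predicted per-vertex skinning weights and per-part rigid transformations, using standard linear blend skinning. All names, shapes, and the toy usage are illustrative assumptions.

```python
import numpy as np

def articulate(vertices, skin_weights, rotations, translations):
    """Deform a mesh with predicted part transformations (illustrative sketch).

    vertices:     (V, 3) rest-pose vertex positions
    skin_weights: (V, K) soft assignment of each vertex to K parts (rows sum to 1)
    rotations:    (K, 3, 3) per-part rotation matrices
    translations: (K, 3)    per-part translations
    """
    # Transform every vertex by every part's rigid motion: (K, V, 3)
    per_part = np.einsum('kij,vj->kvi', rotations, vertices) + translations[:, None, :]
    # Blend the K candidate positions with the skinning weights: (V, 3)
    return np.einsum('vk,kvi->vi', skin_weights, per_part)

# Toy usage: two parts; part 0 stays fixed, part 1 is translated upward.
V, K = 4, 2
verts = np.random.rand(V, 3)
w = np.abs(np.random.rand(V, K))
w /= w.sum(axis=1, keepdims=True)          # normalize weights per vertex
R = np.stack([np.eye(3)] * K)              # identity rotations
t = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
posed = articulate(verts, w, R, t)         # (V, 3) deformed vertices
```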