We consider the problem of human deformation transfer, where the goal is to retarget poses between different characters. Traditional methods that tackle this problem require a clear definition of the pose, and use this definition to transfer poses between characters. In this work, we take a different approach and transform the identity of a character into a new identity without modifying the character's pose. This offers the advantage of not having to define equivalences between 3D human poses, which is not straightforward, since poses tend to vary with the identity of the character performing them and their meaning is highly contextual. To achieve the deformation transfer, we propose a neural encoder-decoder architecture in which only identity information is encoded and the decoder is conditioned on the pose. We use pose-independent representations, such as isometry-invariant shape characteristics, to represent identity features. Our model uses these features to supervise the prediction of offsets from the deformed pose to the result of the transfer. We show experimentally that our method outperforms state-of-the-art methods both quantitatively and qualitatively, and generalises better to poses not seen during training. We also introduce a fine-tuning step that yields competitive results for extreme identities and allows the transfer of simple clothing.
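To make the described architecture concrete, the following is a minimal sketch (not the authors' implementation) of a pose-conditioned identity-transfer network in PyTorch: an encoder pools pose-independent identity features into a latent code, and a decoder conditioned on the posed source mesh predicts per-vertex offsets. All module names, dimensions, and the pooling choice are hypothetical assumptions for illustration only.

```python
# Minimal sketch of the identity-encoder / pose-conditioned decoder idea.
# Assumes meshes with a fixed vertex count; all names here are hypothetical.
import torch
import torch.nn as nn

class IdentityEncoder(nn.Module):
    """Encodes pose-independent identity features into a single latent code."""
    def __init__(self, feat_dim, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, identity_feats):            # (B, N, feat_dim)
        codes = self.mlp(identity_feats)          # per-vertex codes
        return codes.max(dim=1).values            # (B, latent_dim) pooled identity code

class PoseConditionedDecoder(nn.Module):
    """Predicts per-vertex offsets from the posed source mesh to the transferred mesh."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, posed_vertices, identity_code):  # (B, N, 3), (B, latent_dim)
        B, N, _ = posed_vertices.shape
        code = identity_code.unsqueeze(1).expand(B, N, -1)
        offsets = self.mlp(torch.cat([posed_vertices, code], dim=-1))
        return posed_vertices + offsets                # transferred mesh vertices

# Toy usage with random tensors standing in for mesh data.
enc, dec = IdentityEncoder(feat_dim=16), PoseConditionedDecoder()
identity_feats = torch.randn(2, 1000, 16)   # e.g. isometry-invariant shape descriptors
posed_vertices = torch.randn(2, 1000, 3)    # source character in the target pose
result = dec(posed_vertices, enc(identity_feats))
print(result.shape)                          # torch.Size([2, 1000, 3])
```

In this sketch the offsets would be supervised against ground-truth transferred meshes, mirroring the abstract's description of predicting offsets from the deformed pose to the result of the transfer.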