Drawing images of characters in desired poses is an essential but laborious task in anime production, and assisting artists with this work has become a research hotspot in recent years. In this paper, we present the Collaborative Neural Rendering (CoNR) method, which creates new images of a character in specified poses from a few reference images (a.k.a. character sheets). In general, the diverse hairstyles and garments of anime characters defy the use of universal body models such as SMPL, which fit most unclothed human shapes. To overcome this, CoNR uses a compact and easy-to-obtain landmark encoding to avoid building a unified UV mapping in the pipeline. In addition, the performance of CoNR improves significantly when multiple reference images are available, thanks to feature-space cross-view warping in a carefully designed neural network. Moreover, we have collected a character sheet dataset containing over 700,000 hand-drawn and synthesized images of diverse poses to facilitate research in this area. Our code and demo are available at https://github.com/megvii-research/IJCAI2023-CoNR.
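To make the idea of feature-space cross-view warping more concrete, the sketch below warps feature maps from several reference views with dense flow fields and fuses them with per-view confidence weights. This is only a minimal PyTorch illustration of the general technique; the function names (`warp_features`, `fuse_views`), the pixel-offset flow convention, and the softmax-based fusion are assumptions for exposition, not CoNR's actual implementation.

```python
import torch
import torch.nn.functional as F


def warp_features(feat, flow):
    """Warp one reference-view feature map with a dense flow field (bilinear sampling).

    feat: (N, C, H, W) feature map from a reference image.
    flow: (N, 2, H, W) per-pixel offsets (in pixels) from the target view
          back into the reference view. Hypothetical convention for this sketch.
    """
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (N, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)


def fuse_views(feats, flows, logits):
    """Warp features from several reference views and fuse them.

    feats:  list of (N, C, H, W) feature maps, one per reference image.
    flows:  list of (N, 2, H, W) predicted flows, one per reference image.
    logits: (N, V, H, W) per-view confidence logits; softmax over V gives
            the fusion weights (an assumed fusion rule, not the paper's).
    """
    warped = torch.stack(
        [warp_features(f, w) for f, w in zip(feats, flows)], dim=1
    )  # (N, V, C, H, W)
    weights = torch.softmax(logits, dim=1).unsqueeze(2)  # (N, V, 1, H, W)
    return (warped * weights).sum(dim=1)  # (N, C, H, W)
```

In this sketch, adding reference views simply adds more warped candidates to the weighted sum, which is one plausible way a network can benefit from multiple character-sheet images without any unified UV mapping.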